Test Report: Docker_Linux_crio 21767

                    
792b73f7e6a323c75f1a3ad863987d7e01fd8059:2025-10-25:42055

Failed tests (38/326)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.29
35 TestAddons/parallel/Registry 14.58
36 TestAddons/parallel/RegistryCreds 0.44
37 TestAddons/parallel/Ingress 149.49
38 TestAddons/parallel/InspektorGadget 5.27
39 TestAddons/parallel/MetricsServer 5.34
41 TestAddons/parallel/CSI 45.62
42 TestAddons/parallel/Headlamp 2.75
43 TestAddons/parallel/CloudSpanner 5.29
44 TestAddons/parallel/LocalPath 10.18
45 TestAddons/parallel/NvidiaDevicePlugin 6.29
46 TestAddons/parallel/Yakd 5.28
47 TestAddons/parallel/AmdGpuDevicePlugin 6.29
97 TestFunctional/parallel/ServiceCmdConnect 603.1
114 TestFunctional/parallel/ServiceCmd/DeployApp 600.69
134 TestFunctional/parallel/ImageCommands/ImageListShort 2.27
140 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.95
141 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.99
142 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.65
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.33
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.25
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.51
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.56
153 TestFunctional/parallel/ServiceCmd/Format 0.56
154 TestFunctional/parallel/ServiceCmd/URL 0.56
190 TestJSONOutput/pause/Command 2.5
196 TestJSONOutput/unpause/Command 2.19
262 TestPause/serial/Pause 7.5
344 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.91
351 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.6
354 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.27
357 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.48
366 TestStartStop/group/newest-cni/serial/Pause 7.05
374 TestStartStop/group/old-k8s-version/serial/Pause 6.49
379 TestStartStop/group/no-preload/serial/Pause 5.96
383 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.13
384 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.44
391 TestStartStop/group/embed-certs/serial/Pause 5.84
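
Each failure below can be replayed in isolation. A minimal invocation sketch, assuming the standard minikube integration-test layout (the integration build tag, -run pattern, and --minikube-start-args flag follow the minikube contributor docs and are not part of this report; verify against test/integration/main_test.go):

	# build out/minikube-linux-amd64 (the binary the suite invokes), then re-run one failed test
	make
	go test -v -timeout 30m -tags=integration ./test/integration \
	  -run 'TestAddons/parallel/Registry' \
	  --minikube-start-args="--driver=docker --container-runtime=crio"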
TestAddons/serial/Volcano (0.29s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-582494 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-582494 addons disable volcano --alsologtostderr -v=1: exit status 11 (292.011965ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:34:49.144101  335195 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:34:49.145165  335195 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:34:49.145187  335195 out.go:374] Setting ErrFile to fd 2...
	I1025 09:34:49.145194  335195 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:34:49.145435  335195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 09:34:49.145767  335195 mustload.go:65] Loading cluster: addons-582494
	I1025 09:34:49.146158  335195 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:34:49.146173  335195 addons.go:606] checking whether the cluster is paused
	I1025 09:34:49.146257  335195 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:34:49.146271  335195 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:34:49.146678  335195 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:34:49.166142  335195 ssh_runner.go:195] Run: systemctl --version
	I1025 09:34:49.166207  335195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:34:49.185149  335195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:34:49.286792  335195 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:34:49.286912  335195 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:34:49.318819  335195 cri.go:89] found id: "a590641d195442d7c8f9417c224a3dfd0909fa17ff5dffbbee56d77203a7bc30"
	I1025 09:34:49.318841  335195 cri.go:89] found id: "14680175d4318c6439bfb260920f00d4fb15de1e1ed56f7cd5a7fdc5088d817c"
	I1025 09:34:49.318846  335195 cri.go:89] found id: "8b3ea24513b9dbeed1495a8ece257396262d09ae53d85f508fd9e1aa15fae881"
	I1025 09:34:49.318848  335195 cri.go:89] found id: "1c33d20dccf9d28551e1afe73e2aa2a5233a190fe5036da5597ab8f98d35e7e1"
	I1025 09:34:49.318851  335195 cri.go:89] found id: "19e3e274001e72f84f8eb6cbd581c789c82111bc575de760d12a318646815997"
	I1025 09:34:49.318854  335195 cri.go:89] found id: "fd4a5a7d8c5f4281000825cc9877d3ea27a21a958879a5db98ee78c72c35f3f4"
	I1025 09:34:49.318857  335195 cri.go:89] found id: "aaacd09fa43cb6730a3a85ccb82d8f4f88d649d37aed22b5d9478f826dd71446"
	I1025 09:34:49.318859  335195 cri.go:89] found id: "5f1abc3fa71fd76f7122379a39679051b1b37e07736695f416558bb08013c9a0"
	I1025 09:34:49.318861  335195 cri.go:89] found id: "ba8a2ae228e5ae5757cffd5f4e4c1b0f6a57d3b7dbac09500e7eb8bad2ffeda6"
	I1025 09:34:49.318869  335195 cri.go:89] found id: "b2e5cedb9fdb4dc8cf750ad182b9d0b075fe38dfe8202975ba1bc91144918969"
	I1025 09:34:49.318872  335195 cri.go:89] found id: "53959ea9bc3e27a71fdfa582a79586fd4fbba5704ce52884b6f578c2371cf734"
	I1025 09:34:49.318874  335195 cri.go:89] found id: "214643b0e233a8f7275185c6308eadd6b3d0e92ec613c31139061014c04338cd"
	I1025 09:34:49.318876  335195 cri.go:89] found id: "1bf763269e9c6cc17fb6ef6bcce3ea5f64cabe52e37b91c75c32967fd2e733f1"
	I1025 09:34:49.318879  335195 cri.go:89] found id: "e59c39fff2eab9a7167d0388e3624c34d57aee469cc349ffd0faa057312a177f"
	I1025 09:34:49.318881  335195 cri.go:89] found id: "42f1c21ebcd7182710da30d4c9fa79ad171f45c43481b3a15df698872a884c69"
	I1025 09:34:49.318892  335195 cri.go:89] found id: "eb8b6e448a83470b682c8b0a60f02504d2943bfc97e4fb2b6411d4a79b1140d5"
	I1025 09:34:49.318899  335195 cri.go:89] found id: "d0697f1703581cace0f227a3658ea08db9401a3bc389367933d7353764a27242"
	I1025 09:34:49.318904  335195 cri.go:89] found id: "b927c0ae13deb3eeb19fde0c0340c1c698692cca9d7ba0a28cdeb1167e99cd0a"
	I1025 09:34:49.318907  335195 cri.go:89] found id: "29d8b7fdf8c84878165f02cb5f082040681528c27127f1406864d3c2332146e8"
	I1025 09:34:49.318909  335195 cri.go:89] found id: "d0a9822bc2dd891d4ea20e8a611c10464e64ace303744b409fe97806e4953c3b"
	I1025 09:34:49.318912  335195 cri.go:89] found id: "19a44bd56404c19362fe3617a93fa60a8364e0f2019540c002e629ef5e155d5f"
	I1025 09:34:49.318914  335195 cri.go:89] found id: "9b9bb34ede66b74552aebc3ff36135f18e45f2f481956f9f42f40048213a058d"
	I1025 09:34:49.318916  335195 cri.go:89] found id: "d0ccb48b50e7aa35ae169d4df1678a14100d85aad31e6a93422011b7ddccd176"
	I1025 09:34:49.318919  335195 cri.go:89] found id: "62e249a5b3adf3fce1b9e30d1842ff783f291aff8d9e9b2c1082ea8806fadacf"
	I1025 09:34:49.318921  335195 cri.go:89] found id: ""
	I1025 09:34:49.318965  335195 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:34:49.335914  335195 out.go:203] 
	W1025 09:34:49.337012  335195 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:34:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:34:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:34:49.337039  335195 out.go:285] * 
	* 
	W1025 09:34:49.360631  335195 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:34:49.362381  335195 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-582494 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.29s)
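
The signature above — MK_ADDON_DISABLE_PAUSED from `sudo runc list -f json` failing with "open /run/runc: no such file or directory" — recurs verbatim in the Registry and RegistryCreds blocks below: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers via crictl and then querying runc directly. Both steps can be replayed by hand with commands lifted from the stderr trace (a plausible cause is that this kicbase image runs containers through cri-o's configured runtime, so the standalone runc state directory /run/runc is never populated; that reading is inferred from the error text, not stated in the report):

	out/minikube-linux-amd64 -p addons-582494 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # succeeds, prints container IDs
	out/minikube-linux-amd64 -p addons-582494 ssh -- sudo runc list -f json   # exits 1: open /run/runc: no such file or directory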

TestAddons/parallel/Registry (14.58s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.755026ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-jftz9" [8a2e1780-bcf0-4e37-98b1-fef42642e586] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002702113s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-vjtwb" [0113a3a7-cfbd-4a9a-a392-206524677a89] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004085322s
addons_test.go:392: (dbg) Run:  kubectl --context addons-582494 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-582494 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-582494 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.089706946s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-582494 ip
2025/10/25 09:35:12 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-582494 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-582494 addons disable registry --alsologtostderr -v=1: exit status 11 (260.482041ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:35:12.624427  337667 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:35:12.625391  337667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:12.625404  337667 out.go:374] Setting ErrFile to fd 2...
	I1025 09:35:12.625408  337667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:12.625647  337667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 09:35:12.625943  337667 mustload.go:65] Loading cluster: addons-582494
	I1025 09:35:12.626408  337667 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:12.626436  337667 addons.go:606] checking whether the cluster is paused
	I1025 09:35:12.626545  337667 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:12.626561  337667 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:35:12.626986  337667 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:35:12.645568  337667 ssh_runner.go:195] Run: systemctl --version
	I1025 09:35:12.645629  337667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:35:12.664960  337667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:35:12.766871  337667 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:35:12.766974  337667 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:35:12.799142  337667 cri.go:89] found id: "a590641d195442d7c8f9417c224a3dfd0909fa17ff5dffbbee56d77203a7bc30"
	I1025 09:35:12.799162  337667 cri.go:89] found id: "14680175d4318c6439bfb260920f00d4fb15de1e1ed56f7cd5a7fdc5088d817c"
	I1025 09:35:12.799166  337667 cri.go:89] found id: "8b3ea24513b9dbeed1495a8ece257396262d09ae53d85f508fd9e1aa15fae881"
	I1025 09:35:12.799170  337667 cri.go:89] found id: "1c33d20dccf9d28551e1afe73e2aa2a5233a190fe5036da5597ab8f98d35e7e1"
	I1025 09:35:12.799173  337667 cri.go:89] found id: "19e3e274001e72f84f8eb6cbd581c789c82111bc575de760d12a318646815997"
	I1025 09:35:12.799183  337667 cri.go:89] found id: "fd4a5a7d8c5f4281000825cc9877d3ea27a21a958879a5db98ee78c72c35f3f4"
	I1025 09:35:12.799187  337667 cri.go:89] found id: "aaacd09fa43cb6730a3a85ccb82d8f4f88d649d37aed22b5d9478f826dd71446"
	I1025 09:35:12.799189  337667 cri.go:89] found id: "5f1abc3fa71fd76f7122379a39679051b1b37e07736695f416558bb08013c9a0"
	I1025 09:35:12.799192  337667 cri.go:89] found id: "ba8a2ae228e5ae5757cffd5f4e4c1b0f6a57d3b7dbac09500e7eb8bad2ffeda6"
	I1025 09:35:12.799198  337667 cri.go:89] found id: "b2e5cedb9fdb4dc8cf750ad182b9d0b075fe38dfe8202975ba1bc91144918969"
	I1025 09:35:12.799201  337667 cri.go:89] found id: "53959ea9bc3e27a71fdfa582a79586fd4fbba5704ce52884b6f578c2371cf734"
	I1025 09:35:12.799209  337667 cri.go:89] found id: "214643b0e233a8f7275185c6308eadd6b3d0e92ec613c31139061014c04338cd"
	I1025 09:35:12.799217  337667 cri.go:89] found id: "1bf763269e9c6cc17fb6ef6bcce3ea5f64cabe52e37b91c75c32967fd2e733f1"
	I1025 09:35:12.799220  337667 cri.go:89] found id: "e59c39fff2eab9a7167d0388e3624c34d57aee469cc349ffd0faa057312a177f"
	I1025 09:35:12.799222  337667 cri.go:89] found id: "42f1c21ebcd7182710da30d4c9fa79ad171f45c43481b3a15df698872a884c69"
	I1025 09:35:12.799229  337667 cri.go:89] found id: "eb8b6e448a83470b682c8b0a60f02504d2943bfc97e4fb2b6411d4a79b1140d5"
	I1025 09:35:12.799235  337667 cri.go:89] found id: "d0697f1703581cace0f227a3658ea08db9401a3bc389367933d7353764a27242"
	I1025 09:35:12.799238  337667 cri.go:89] found id: "b927c0ae13deb3eeb19fde0c0340c1c698692cca9d7ba0a28cdeb1167e99cd0a"
	I1025 09:35:12.799241  337667 cri.go:89] found id: "29d8b7fdf8c84878165f02cb5f082040681528c27127f1406864d3c2332146e8"
	I1025 09:35:12.799243  337667 cri.go:89] found id: "d0a9822bc2dd891d4ea20e8a611c10464e64ace303744b409fe97806e4953c3b"
	I1025 09:35:12.799245  337667 cri.go:89] found id: "19a44bd56404c19362fe3617a93fa60a8364e0f2019540c002e629ef5e155d5f"
	I1025 09:35:12.799248  337667 cri.go:89] found id: "9b9bb34ede66b74552aebc3ff36135f18e45f2f481956f9f42f40048213a058d"
	I1025 09:35:12.799250  337667 cri.go:89] found id: "d0ccb48b50e7aa35ae169d4df1678a14100d85aad31e6a93422011b7ddccd176"
	I1025 09:35:12.799253  337667 cri.go:89] found id: "62e249a5b3adf3fce1b9e30d1842ff783f291aff8d9e9b2c1082ea8806fadacf"
	I1025 09:35:12.799255  337667 cri.go:89] found id: ""
	I1025 09:35:12.799294  337667 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:35:12.815389  337667 out.go:203] 
	W1025 09:35:12.816635  337667 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:35:12.816658  337667 out.go:285] * 
	* 
	W1025 09:35:12.819864  337667 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:35:12.821228  337667 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-582494 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.58s)

TestAddons/parallel/RegistryCreds (0.44s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.475678ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-582494
addons_test.go:332: (dbg) Run:  kubectl --context addons-582494 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-582494 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-582494 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (265.949915ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:35:15.408764  338334 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:35:15.409804  338334 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:15.409819  338334 out.go:374] Setting ErrFile to fd 2...
	I1025 09:35:15.409823  338334 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:15.410059  338334 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 09:35:15.410363  338334 mustload.go:65] Loading cluster: addons-582494
	I1025 09:35:15.410740  338334 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:15.410757  338334 addons.go:606] checking whether the cluster is paused
	I1025 09:35:15.410841  338334 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:15.410853  338334 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:35:15.411215  338334 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:35:15.429640  338334 ssh_runner.go:195] Run: systemctl --version
	I1025 09:35:15.429709  338334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:35:15.448478  338334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:35:15.552495  338334 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:35:15.552580  338334 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:35:15.584402  338334 cri.go:89] found id: "a590641d195442d7c8f9417c224a3dfd0909fa17ff5dffbbee56d77203a7bc30"
	I1025 09:35:15.584423  338334 cri.go:89] found id: "14680175d4318c6439bfb260920f00d4fb15de1e1ed56f7cd5a7fdc5088d817c"
	I1025 09:35:15.584427  338334 cri.go:89] found id: "8b3ea24513b9dbeed1495a8ece257396262d09ae53d85f508fd9e1aa15fae881"
	I1025 09:35:15.584430  338334 cri.go:89] found id: "1c33d20dccf9d28551e1afe73e2aa2a5233a190fe5036da5597ab8f98d35e7e1"
	I1025 09:35:15.584433  338334 cri.go:89] found id: "19e3e274001e72f84f8eb6cbd581c789c82111bc575de760d12a318646815997"
	I1025 09:35:15.584437  338334 cri.go:89] found id: "fd4a5a7d8c5f4281000825cc9877d3ea27a21a958879a5db98ee78c72c35f3f4"
	I1025 09:35:15.584439  338334 cri.go:89] found id: "aaacd09fa43cb6730a3a85ccb82d8f4f88d649d37aed22b5d9478f826dd71446"
	I1025 09:35:15.584442  338334 cri.go:89] found id: "5f1abc3fa71fd76f7122379a39679051b1b37e07736695f416558bb08013c9a0"
	I1025 09:35:15.584444  338334 cri.go:89] found id: "ba8a2ae228e5ae5757cffd5f4e4c1b0f6a57d3b7dbac09500e7eb8bad2ffeda6"
	I1025 09:35:15.584449  338334 cri.go:89] found id: "b2e5cedb9fdb4dc8cf750ad182b9d0b075fe38dfe8202975ba1bc91144918969"
	I1025 09:35:15.584452  338334 cri.go:89] found id: "53959ea9bc3e27a71fdfa582a79586fd4fbba5704ce52884b6f578c2371cf734"
	I1025 09:35:15.584454  338334 cri.go:89] found id: "214643b0e233a8f7275185c6308eadd6b3d0e92ec613c31139061014c04338cd"
	I1025 09:35:15.584457  338334 cri.go:89] found id: "1bf763269e9c6cc17fb6ef6bcce3ea5f64cabe52e37b91c75c32967fd2e733f1"
	I1025 09:35:15.584467  338334 cri.go:89] found id: "e59c39fff2eab9a7167d0388e3624c34d57aee469cc349ffd0faa057312a177f"
	I1025 09:35:15.584472  338334 cri.go:89] found id: "42f1c21ebcd7182710da30d4c9fa79ad171f45c43481b3a15df698872a884c69"
	I1025 09:35:15.584485  338334 cri.go:89] found id: "eb8b6e448a83470b682c8b0a60f02504d2943bfc97e4fb2b6411d4a79b1140d5"
	I1025 09:35:15.584490  338334 cri.go:89] found id: "d0697f1703581cace0f227a3658ea08db9401a3bc389367933d7353764a27242"
	I1025 09:35:15.584493  338334 cri.go:89] found id: "b927c0ae13deb3eeb19fde0c0340c1c698692cca9d7ba0a28cdeb1167e99cd0a"
	I1025 09:35:15.584495  338334 cri.go:89] found id: "29d8b7fdf8c84878165f02cb5f082040681528c27127f1406864d3c2332146e8"
	I1025 09:35:15.584498  338334 cri.go:89] found id: "d0a9822bc2dd891d4ea20e8a611c10464e64ace303744b409fe97806e4953c3b"
	I1025 09:35:15.584508  338334 cri.go:89] found id: "19a44bd56404c19362fe3617a93fa60a8364e0f2019540c002e629ef5e155d5f"
	I1025 09:35:15.584511  338334 cri.go:89] found id: "9b9bb34ede66b74552aebc3ff36135f18e45f2f481956f9f42f40048213a058d"
	I1025 09:35:15.584513  338334 cri.go:89] found id: "d0ccb48b50e7aa35ae169d4df1678a14100d85aad31e6a93422011b7ddccd176"
	I1025 09:35:15.584516  338334 cri.go:89] found id: "62e249a5b3adf3fce1b9e30d1842ff783f291aff8d9e9b2c1082ea8806fadacf"
	I1025 09:35:15.584519  338334 cri.go:89] found id: ""
	I1025 09:35:15.584556  338334 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:35:15.599478  338334 out.go:203] 
	W1025 09:35:15.600765  338334 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:35:15.600792  338334 out.go:285] * 
	* 
	W1025 09:35:15.603910  338334 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:35:15.605246  338334 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-582494 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.44s)

TestAddons/parallel/Ingress (149.49s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-582494 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-582494 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-582494 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [5c019257-8d8e-4a6e-9c63-e35b01f144b6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [5c019257-8d8e-4a6e-9c63-e35b01f144b6] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003749091s
I1025 09:35:21.647682  325455 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-582494 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-582494 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.737721286s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-582494 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-582494 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
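
The ssh step above propagated exit status 28, which matches curl's CURLE_OPERATION_TIMEDOUT: the request to 127.0.0.1:80 inside the node never got a response within the 2m15s retry window, pointing at the ingress-nginx controller rather than at ssh itself. A quicker manual probe with an explicit deadline (the --max-time flag and the controller check are illustrative additions, not part of the test):

	out/minikube-linux-amd64 -p addons-582494 ssh -- curl -sS --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/
	kubectl --context addons-582494 -n ingress-nginx get pods -o wide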
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-582494
helpers_test.go:243: (dbg) docker inspect addons-582494:

-- stdout --
	[
	    {
	        "Id": "a7ce438518590abcd5d536f30162cd83066b6f288f1c8f26ff6a111d80f7e227",
	        "Created": "2025-10-25T09:32:40.58689965Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 327412,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:32:40.62334152Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/a7ce438518590abcd5d536f30162cd83066b6f288f1c8f26ff6a111d80f7e227/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a7ce438518590abcd5d536f30162cd83066b6f288f1c8f26ff6a111d80f7e227/hostname",
	        "HostsPath": "/var/lib/docker/containers/a7ce438518590abcd5d536f30162cd83066b6f288f1c8f26ff6a111d80f7e227/hosts",
	        "LogPath": "/var/lib/docker/containers/a7ce438518590abcd5d536f30162cd83066b6f288f1c8f26ff6a111d80f7e227/a7ce438518590abcd5d536f30162cd83066b6f288f1c8f26ff6a111d80f7e227-json.log",
	        "Name": "/addons-582494",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-582494:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-582494",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a7ce438518590abcd5d536f30162cd83066b6f288f1c8f26ff6a111d80f7e227",
	                "LowerDir": "/var/lib/docker/overlay2/10a40b574ff84e32355b08c83c6a2e1e344be14f7dde75bab0523cd4850e1746-init/diff:/var/lib/docker/overlay2/9d1960ec6ab151df7efbd4d43f9ccd1aaf4e0bd9e7db4285118644a6d5a54279/diff",
	                "MergedDir": "/var/lib/docker/overlay2/10a40b574ff84e32355b08c83c6a2e1e344be14f7dde75bab0523cd4850e1746/merged",
	                "UpperDir": "/var/lib/docker/overlay2/10a40b574ff84e32355b08c83c6a2e1e344be14f7dde75bab0523cd4850e1746/diff",
	                "WorkDir": "/var/lib/docker/overlay2/10a40b574ff84e32355b08c83c6a2e1e344be14f7dde75bab0523cd4850e1746/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-582494",
	                "Source": "/var/lib/docker/volumes/addons-582494/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-582494",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-582494",
	                "name.minikube.sigs.k8s.io": "addons-582494",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4020f793b3162eb0bb0e79b3984f3c5aad4f6a54e19a76f9936eb27f065c6406",
	            "SandboxKey": "/var/run/docker/netns/4020f793b316",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-582494": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:49:78:8e:1e:c6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "09e159133a12c1c4eda5dd1d02a15878cfb36d205e857ff1b7046b1a63057f54",
	                    "EndpointID": "79fdf4e4be84bf8a3ba17b737d040f83cd9c9c0902716d626a030afcfde419eb",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-582494",
	                        "a7ce43851859"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
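One detail worth cross-checking in the inspect output: the 22/tcp mapping to 127.0.0.1:32768 is exactly the ssh endpoint the failing addon-disable runs connected to (the sshutil lines in the stderr traces above). The tests derive it with the same inspect template that appears in those traces:

	docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494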
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-582494 -n addons-582494
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-582494 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-582494 logs -n 25: (1.260005923s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-351445 --alsologtostderr --binary-mirror http://127.0.0.1:36611 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-351445 │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ delete  │ -p binary-mirror-351445                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-351445 │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ addons  │ enable dashboard -p addons-582494                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-582494        │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ addons  │ disable dashboard -p addons-582494                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-582494        │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ start   │ -p addons-582494 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-582494        │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ addons-582494 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-582494        │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │                     │
	│ addons  │ addons-582494 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-582494        │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │                     │
	│ addons  │ enable headlamp -p addons-582494 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-582494        │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │                     │
	│ addons  │ addons-582494 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-582494        │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ addons  │ addons-582494 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-582494        │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ addons  │ addons-582494 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-582494        │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ addons  │ addons-582494 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-582494        │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ addons  │ addons-582494 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-582494        │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ ssh     │ addons-582494 ssh cat /opt/local-path-provisioner/pvc-122ff3f1-ae75-4f96-94d0-2db3ca74ea0b_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-582494        │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ addons  │ addons-582494 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-582494        │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ ip      │ addons-582494 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-582494        │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ addons  │ addons-582494 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-582494        │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ addons  │ addons-582494 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-582494        │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-582494                                                                                                                                                                                                                                                                                                                                                                                           │ addons-582494        │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ addons  │ addons-582494 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-582494        │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ addons  │ addons-582494 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-582494        │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ ssh     │ addons-582494 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-582494        │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ addons  │ addons-582494 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-582494        │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ addons  │ addons-582494 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-582494        │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ ip      │ addons-582494 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-582494        │ jenkins │ v1.37.0 │ 25 Oct 25 09:37 UTC │ 25 Oct 25 09:37 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:32:18
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:32:18.278840  326776 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:32:18.278978  326776 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:32:18.279002  326776 out.go:374] Setting ErrFile to fd 2...
	I1025 09:32:18.279008  326776 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:32:18.279258  326776 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 09:32:18.279832  326776 out.go:368] Setting JSON to false
	I1025 09:32:18.280773  326776 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4487,"bootTime":1761380251,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:32:18.280876  326776 start.go:141] virtualization: kvm guest
	I1025 09:32:18.283020  326776 out.go:179] * [addons-582494] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:32:18.284535  326776 notify.go:220] Checking for updates...
	I1025 09:32:18.284574  326776 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 09:32:18.286058  326776 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:32:18.287667  326776 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 09:32:18.289062  326776 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	I1025 09:32:18.290413  326776 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:32:18.291754  326776 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:32:18.293290  326776 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:32:18.318526  326776 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:32:18.318679  326776 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:32:18.383598  326776 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:47 SystemTime:2025-10-25 09:32:18.372404563 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:32:18.383731  326776 docker.go:318] overlay module found
	I1025 09:32:18.385723  326776 out.go:179] * Using the docker driver based on user configuration
	I1025 09:32:18.387147  326776 start.go:305] selected driver: docker
	I1025 09:32:18.387163  326776 start.go:925] validating driver "docker" against <nil>
	I1025 09:32:18.387175  326776 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:32:18.387762  326776 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:32:18.451941  326776 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:47 SystemTime:2025-10-25 09:32:18.441061842 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:32:18.452118  326776 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:32:18.452296  326776 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:32:18.454157  326776 out.go:179] * Using Docker driver with root privileges
	I1025 09:32:18.455819  326776 cni.go:84] Creating CNI manager for ""
	I1025 09:32:18.455885  326776 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:32:18.455897  326776 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:32:18.455980  326776 start.go:349] cluster config:
	{Name:addons-582494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-582494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:32:18.457503  326776 out.go:179] * Starting "addons-582494" primary control-plane node in "addons-582494" cluster
	I1025 09:32:18.458825  326776 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:32:18.460205  326776 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:32:18.461517  326776 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:32:18.461569  326776 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:32:18.461583  326776 cache.go:58] Caching tarball of preloaded images
	I1025 09:32:18.461680  326776 preload.go:233] Found /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:32:18.461679  326776 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:32:18.461695  326776 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:32:18.462020  326776 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/config.json ...
	I1025 09:32:18.462051  326776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/config.json: {Name:mkb06601fc8d67ab1feb33e8665675381486554a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
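	With the profile saved, the on-disk config can be sanity-checked directly (a sketch, assuming jq is available on the host; not part of the test run):
	
	    jq '.KubernetesConfig | {KubernetesVersion, ClusterName, ContainerRuntime}' \
	      /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/config.json
	    # expected per the log: v1.34.1 / addons-582494 / crio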
	I1025 09:32:18.481608  326776 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1025 09:32:18.481780  326776 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1025 09:32:18.481812  326776 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1025 09:32:18.481820  326776 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1025 09:32:18.481832  326776 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1025 09:32:18.481840  326776 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1025 09:32:32.779270  326776 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1025 09:32:32.779308  326776 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:32:32.779398  326776 start.go:360] acquireMachinesLock for addons-582494: {Name:mk7ae4df9f0d4b2c8062e32fc416860ac419156c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:32:32.779540  326776 start.go:364] duration metric: took 110.573µs to acquireMachinesLock for "addons-582494"
	I1025 09:32:32.779578  326776 start.go:93] Provisioning new machine with config: &{Name:addons-582494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-582494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:32:32.779671  326776 start.go:125] createHost starting for "" (driver="docker")
	I1025 09:32:32.781585  326776 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1025 09:32:32.781845  326776 start.go:159] libmachine.API.Create for "addons-582494" (driver="docker")
	I1025 09:32:32.781876  326776 client.go:168] LocalClient.Create starting
	I1025 09:32:32.782047  326776 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem
	I1025 09:32:32.853582  326776 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem
	I1025 09:32:33.251807  326776 cli_runner.go:164] Run: docker network inspect addons-582494 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:32:33.269558  326776 cli_runner.go:211] docker network inspect addons-582494 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:32:33.269645  326776 network_create.go:284] running [docker network inspect addons-582494] to gather additional debugging logs...
	I1025 09:32:33.269673  326776 cli_runner.go:164] Run: docker network inspect addons-582494
	W1025 09:32:33.287793  326776 cli_runner.go:211] docker network inspect addons-582494 returned with exit code 1
	I1025 09:32:33.287831  326776 network_create.go:287] error running [docker network inspect addons-582494]: docker network inspect addons-582494: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-582494 not found
	I1025 09:32:33.287845  326776 network_create.go:289] output of [docker network inspect addons-582494]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-582494 not found
	
	** /stderr **
	I1025 09:32:33.287938  326776 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:32:33.306521  326776 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018ec860}
	I1025 09:32:33.306574  326776 network_create.go:124] attempt to create docker network addons-582494 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 09:32:33.306624  326776 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-582494 addons-582494
	I1025 09:32:33.370986  326776 network_create.go:108] docker network addons-582494 192.168.49.0/24 created
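	The subnet choice can be confirmed against the network Docker actually created (a sketch using the plain docker CLI, not part of the test run):
	
	    docker network inspect addons-582494 \
	      --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}'
	    # expected per the log: 192.168.49.0/24 gw 192.168.49.1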
	I1025 09:32:33.371015  326776 kic.go:121] calculated static IP "192.168.49.2" for the "addons-582494" container
	I1025 09:32:33.371074  326776 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:32:33.388528  326776 cli_runner.go:164] Run: docker volume create addons-582494 --label name.minikube.sigs.k8s.io=addons-582494 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:32:33.411916  326776 oci.go:103] Successfully created a docker volume addons-582494
	I1025 09:32:33.412014  326776 cli_runner.go:164] Run: docker run --rm --name addons-582494-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-582494 --entrypoint /usr/bin/test -v addons-582494:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:32:35.926085  326776 cli_runner.go:217] Completed: docker run --rm --name addons-582494-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-582494 --entrypoint /usr/bin/test -v addons-582494:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.513990099s)
	I1025 09:32:35.926126  326776 oci.go:107] Successfully prepared a docker volume addons-582494
	I1025 09:32:35.926144  326776 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:32:35.926169  326776 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:32:35.926282  326776 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-582494:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 09:32:40.514678  326776 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-582494:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.588340028s)
	I1025 09:32:40.514719  326776 kic.go:203] duration metric: took 4.588547297s to extract preloaded images to volume ...
	W1025 09:32:40.514827  326776 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 09:32:40.514872  326776 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 09:32:40.514952  326776 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:32:40.570572  326776 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-582494 --name addons-582494 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-582494 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-582494 --network addons-582494 --ip 192.168.49.2 --volume addons-582494:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 09:32:40.875070  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Running}}
	I1025 09:32:40.893236  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:32:40.912237  326776 cli_runner.go:164] Run: docker exec addons-582494 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:32:40.960676  326776 oci.go:144] the created container "addons-582494" has a running status.
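	Each --publish=127.0.0.1:: flag above requests an ephemeral host port; the mappings Docker assigned can be listed afterwards (sketch):
	
	    docker port addons-582494
	    # e.g. 22/tcp -> 127.0.0.1:32768 (the SSH port used below); 2376, 5000, 8443, 32443 likewise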
	I1025 09:32:40.960709  326776 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa...
	I1025 09:32:41.202355  326776 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:32:41.232396  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:32:41.252739  326776 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:32:41.252759  326776 kic_runner.go:114] Args: [docker exec --privileged addons-582494 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 09:32:41.302258  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:32:41.321107  326776 machine.go:93] provisionDockerMachine start ...
	I1025 09:32:41.321235  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:32:41.339597  326776 main.go:141] libmachine: Using SSH client type: native
	I1025 09:32:41.339892  326776 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1025 09:32:41.339917  326776 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:32:41.485818  326776 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-582494
	
	I1025 09:32:41.485859  326776 ubuntu.go:182] provisioning hostname "addons-582494"
	I1025 09:32:41.485941  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:32:41.505038  326776 main.go:141] libmachine: Using SSH client type: native
	I1025 09:32:41.505295  326776 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1025 09:32:41.505341  326776 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-582494 && echo "addons-582494" | sudo tee /etc/hostname
	I1025 09:32:41.659000  326776 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-582494
	
	I1025 09:32:41.659090  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:32:41.677749  326776 main.go:141] libmachine: Using SSH client type: native
	I1025 09:32:41.677962  326776 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1025 09:32:41.677988  326776 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-582494' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-582494/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-582494' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:32:41.820552  326776 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:32:41.820581  326776 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-321838/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-321838/.minikube}
	I1025 09:32:41.820609  326776 ubuntu.go:190] setting up certificates
	I1025 09:32:41.820625  326776 provision.go:84] configureAuth start
	I1025 09:32:41.820693  326776 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-582494
	I1025 09:32:41.839242  326776 provision.go:143] copyHostCerts
	I1025 09:32:41.839369  326776 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem (1679 bytes)
	I1025 09:32:41.839538  326776 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem (1078 bytes)
	I1025 09:32:41.839629  326776 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem (1123 bytes)
	I1025 09:32:41.839705  326776 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem org=jenkins.addons-582494 san=[127.0.0.1 192.168.49.2 addons-582494 localhost minikube]
	I1025 09:32:42.016963  326776 provision.go:177] copyRemoteCerts
	I1025 09:32:42.017028  326776 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:32:42.017065  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:32:42.036113  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
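	The same key/port pair supports a manual login outside the test harness (sketch; 32768 is the 22/tcp mapping resolved above):
	
	    ssh -o StrictHostKeyChecking=no \
	      -i /home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa \
	      -p 32768 docker@127.0.0.1 hostname
	    # prints: addons-582494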
	I1025 09:32:42.139751  326776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 09:32:42.161296  326776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 09:32:42.180578  326776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:32:42.198586  326776 provision.go:87] duration metric: took 377.940787ms to configureAuth
	I1025 09:32:42.198616  326776 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:32:42.198807  326776 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:32:42.198910  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:32:42.217627  326776 main.go:141] libmachine: Using SSH client type: native
	I1025 09:32:42.217913  326776 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1025 09:32:42.217937  326776 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:32:42.483013  326776 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
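	The sysconfig file written above can be read back through the node to confirm the insecure-registry flag landed (sketch):
	
	    minikube -p addons-582494 ssh -- cat /etc/sysconfig/crio.minikube
	    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '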
	I1025 09:32:42.483034  326776 machine.go:96] duration metric: took 1.161899629s to provisionDockerMachine
	I1025 09:32:42.483046  326776 client.go:171] duration metric: took 9.701159437s to LocalClient.Create
	I1025 09:32:42.483072  326776 start.go:167] duration metric: took 9.701227109s to libmachine.API.Create "addons-582494"
	I1025 09:32:42.483081  326776 start.go:293] postStartSetup for "addons-582494" (driver="docker")
	I1025 09:32:42.483096  326776 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:32:42.483154  326776 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:32:42.483195  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:32:42.502080  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:32:42.605711  326776 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:32:42.609641  326776 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:32:42.609677  326776 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:32:42.609693  326776 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/addons for local assets ...
	I1025 09:32:42.609770  326776 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/files for local assets ...
	I1025 09:32:42.609803  326776 start.go:296] duration metric: took 126.715685ms for postStartSetup
	I1025 09:32:42.610128  326776 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-582494
	I1025 09:32:42.628475  326776 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/config.json ...
	I1025 09:32:42.628761  326776 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:32:42.628802  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:32:42.647151  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:32:42.746014  326776 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:32:42.750861  326776 start.go:128] duration metric: took 9.971165938s to createHost
	I1025 09:32:42.750894  326776 start.go:83] releasing machines lock for "addons-582494", held for 9.971336583s
	I1025 09:32:42.750963  326776 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-582494
	I1025 09:32:42.769421  326776 ssh_runner.go:195] Run: cat /version.json
	I1025 09:32:42.769477  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:32:42.769493  326776 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:32:42.769564  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:32:42.788772  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:32:42.789068  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:32:42.937282  326776 ssh_runner.go:195] Run: systemctl --version
	I1025 09:32:42.944219  326776 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:32:42.981972  326776 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:32:42.986927  326776 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:32:42.987004  326776 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:32:43.015475  326776 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 09:32:43.015505  326776 start.go:495] detecting cgroup driver to use...
	I1025 09:32:43.015546  326776 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:32:43.015607  326776 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:32:43.033752  326776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:32:43.046726  326776 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:32:43.046790  326776 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:32:43.064699  326776 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:32:43.083297  326776 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:32:43.164950  326776 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:32:43.253090  326776 docker.go:234] disabling docker service ...
	I1025 09:32:43.253160  326776 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:32:43.274205  326776 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:32:43.288246  326776 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:32:43.376132  326776 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:32:43.459427  326776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:32:43.473153  326776 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:32:43.488537  326776 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:32:43.488597  326776 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:32:43.499839  326776 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:32:43.499903  326776 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:32:43.509753  326776 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:32:43.519069  326776 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:32:43.528301  326776 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:32:43.536623  326776 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:32:43.545490  326776 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:32:43.559185  326776 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:32:43.568975  326776 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:32:43.576788  326776 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:32:43.584526  326776 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:32:43.663425  326776 ssh_runner.go:195] Run: sudo systemctl restart crio
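	After the restart, the sed edits above are easy to spot-check in one grep (sketch):
	
	    minikube -p addons-582494 ssh -- sudo grep -E \
	      'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    # expected: pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "systemd",
	    #           conmon_cgroup = "pod", and "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls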
	I1025 09:32:43.773300  326776 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:32:43.773424  326776 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:32:43.777585  326776 start.go:563] Will wait 60s for crictl version
	I1025 09:32:43.777650  326776 ssh_runner.go:195] Run: which crictl
	I1025 09:32:43.781377  326776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:32:43.807809  326776 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:32:43.807932  326776 ssh_runner.go:195] Run: crio --version
	I1025 09:32:43.839882  326776 ssh_runner.go:195] Run: crio --version
	I1025 09:32:43.872630  326776 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:32:43.873764  326776 cli_runner.go:164] Run: docker network inspect addons-582494 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:32:43.892077  326776 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 09:32:43.896591  326776 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
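	The injected host entry can be verified from inside the node (sketch):
	
	    minikube -p addons-582494 ssh -- grep host.minikube.internal /etc/hosts
	    # 192.168.49.1	host.minikube.internal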
	I1025 09:32:43.907339  326776 kubeadm.go:883] updating cluster {Name:addons-582494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-582494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:32:43.907476  326776 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:32:43.907526  326776 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:32:43.943679  326776 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:32:43.943701  326776 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:32:43.943755  326776 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:32:43.970106  326776 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:32:43.970137  326776 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:32:43.970146  326776 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1025 09:32:43.970283  326776 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-582494 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-582494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
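	Once the drop-in below is scp'd into place, systemd can show the unit exactly as the kubelet will run it (sketch):
	
	    minikube -p addons-582494 ssh -- sudo systemctl cat kubelet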
	I1025 09:32:43.970374  326776 ssh_runner.go:195] Run: crio config
	I1025 09:32:44.017446  326776 cni.go:84] Creating CNI manager for ""
	I1025 09:32:44.017474  326776 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:32:44.017498  326776 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:32:44.017522  326776 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-582494 NodeName:addons-582494 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:32:44.017640  326776 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-582494"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
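	Before kubeadm consumes it, the generated config can be run through kubeadm's own validator (a sketch; assumes this kubeadm build ships the 'config validate' subcommand):
	
	    minikube -p addons-582494 ssh -- sudo \
	      /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new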
	I1025 09:32:44.017713  326776 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:32:44.026433  326776 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:32:44.026505  326776 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:32:44.034653  326776 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1025 09:32:44.047572  326776 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:32:44.063444  326776 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1025 09:32:44.076801  326776 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:32:44.080590  326776 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:32:44.090755  326776 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:32:44.175386  326776 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:32:44.205262  326776 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494 for IP: 192.168.49.2
	I1025 09:32:44.205290  326776 certs.go:195] generating shared ca certs ...
	I1025 09:32:44.205311  326776 certs.go:227] acquiring lock for ca certs: {Name:mke559c04eea9c265cb2a7beab0da125bee52db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:44.205478  326776 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key
	I1025 09:32:44.361003  326776 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt ...
	I1025 09:32:44.361037  326776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt: {Name:mk8bdce1ee12ddd552187c0d948bc8faa166349d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:44.362108  326776 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key ...
	I1025 09:32:44.362134  326776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key: {Name:mkeb028f943d6e5f4c0f71a867aa7d09d82dd086 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:44.362232  326776 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key
	I1025 09:32:44.503228  326776 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt ...
	I1025 09:32:44.503258  326776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt: {Name:mkdc5eec83a4ed1db9de64e01bce3a9564f328dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:44.503452  326776 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key ...
	I1025 09:32:44.503463  326776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key: {Name:mk0132d56842ddb86bc075b013ce7da7228f9954 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:44.503537  326776 certs.go:257] generating profile certs ...
	I1025 09:32:44.503599  326776 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.key
	I1025 09:32:44.503614  326776 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt with IP's: []
	I1025 09:32:44.719874  326776 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt ...
	I1025 09:32:44.719919  326776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: {Name:mk073c9b62cf012daa3bf0b54b9ac7b3044f5ba2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:44.720144  326776 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.key ...
	I1025 09:32:44.720161  326776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.key: {Name:mkf512384112ba587ed18c996619ac2d8db2d3a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:44.720275  326776 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/apiserver.key.bb4145d9
	I1025 09:32:44.720305  326776 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/apiserver.crt.bb4145d9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1025 09:32:44.925498  326776 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/apiserver.crt.bb4145d9 ...
	I1025 09:32:44.925530  326776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/apiserver.crt.bb4145d9: {Name:mk1e8803e11f4bc0fb40a3388703af7c1ae56fa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:44.926493  326776 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/apiserver.key.bb4145d9 ...
	I1025 09:32:44.926522  326776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/apiserver.key.bb4145d9: {Name:mk00f6aca9f2620b0fdaa9ab574e1849f36a5262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:44.926694  326776 certs.go:382] copying /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/apiserver.crt.bb4145d9 -> /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/apiserver.crt
	I1025 09:32:44.926803  326776 certs.go:386] copying /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/apiserver.key.bb4145d9 -> /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/apiserver.key
	I1025 09:32:44.926876  326776 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/proxy-client.key
	I1025 09:32:44.926904  326776 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/proxy-client.crt with IP's: []
	I1025 09:32:44.975058  326776 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/proxy-client.crt ...
	I1025 09:32:44.975096  326776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/proxy-client.crt: {Name:mk7bf22fc168f20e56262dafad777ba2ef7c0f44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:44.975340  326776 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/proxy-client.key ...
	I1025 09:32:44.975367  326776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/proxy-client.key: {Name:mk3a2a3843a6ae3d2d57e4ea396646616192104d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:44.975674  326776 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:32:44.975723  326776 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:32:44.975761  326776 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:32:44.975809  326776 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem (1679 bytes)
	I1025 09:32:44.976582  326776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:32:44.996488  326776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:32:45.014945  326776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:32:45.033076  326776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:32:45.051471  326776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 09:32:45.069817  326776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:32:45.088373  326776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:32:45.106990  326776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:32:45.125231  326776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:32:45.146750  326776 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:32:45.160349  326776 ssh_runner.go:195] Run: openssl version
	I1025 09:32:45.167211  326776 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:32:45.179663  326776 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:32:45.184080  326776 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:32 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:32:45.184153  326776 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:32:45.219141  326776 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:32:45.228985  326776 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:32:45.233364  326776 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:32:45.233427  326776 kubeadm.go:400] StartCluster: {Name:addons-582494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-582494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:32:45.233538  326776 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:32:45.233602  326776 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:32:45.261845  326776 cri.go:89] found id: ""
	I1025 09:32:45.261928  326776 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:32:45.270470  326776 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:32:45.278750  326776 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:32:45.278821  326776 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:32:45.287169  326776 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:32:45.287187  326776 kubeadm.go:157] found existing configuration files:
	
	I1025 09:32:45.287234  326776 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:32:45.295487  326776 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:32:45.295560  326776 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:32:45.303554  326776 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:32:45.311732  326776 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:32:45.311800  326776 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:32:45.319492  326776 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:32:45.327690  326776 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:32:45.327740  326776 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:32:45.335500  326776 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:32:45.343600  326776 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:32:45.343699  326776 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 09:32:45.351592  326776 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:32:45.413863  326776 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 09:32:45.473006  326776 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 09:32:55.417158  326776 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:32:55.417213  326776 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:32:55.417352  326776 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:32:55.417404  326776 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 09:32:55.417458  326776 kubeadm.go:318] OS: Linux
	I1025 09:32:55.417513  326776 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:32:55.417559  326776 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:32:55.417601  326776 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:32:55.417668  326776 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:32:55.417722  326776 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:32:55.417804  326776 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:32:55.417875  326776 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:32:55.417945  326776 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 09:32:55.418032  326776 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:32:55.418122  326776 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:32:55.418212  326776 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:32:55.418337  326776 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 09:32:55.419967  326776 out.go:252]   - Generating certificates and keys ...
	I1025 09:32:55.420043  326776 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:32:55.420123  326776 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:32:55.420195  326776 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:32:55.420245  326776 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:32:55.420297  326776 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:32:55.420375  326776 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:32:55.420423  326776 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:32:55.420521  326776 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-582494 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 09:32:55.420566  326776 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:32:55.420671  326776 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-582494 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 09:32:55.420731  326776 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:32:55.420802  326776 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:32:55.420842  326776 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:32:55.420893  326776 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:32:55.420945  326776 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:32:55.420998  326776 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:32:55.421046  326776 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:32:55.421107  326776 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:32:55.421167  326776 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:32:55.421308  326776 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:32:55.421429  326776 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 09:32:55.422654  326776 out.go:252]   - Booting up control plane ...
	I1025 09:32:55.422773  326776 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:32:55.422900  326776 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:32:55.423015  326776 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:32:55.423226  326776 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:32:55.423386  326776 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 09:32:55.423532  326776 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 09:32:55.423619  326776 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:32:55.423659  326776 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:32:55.423817  326776 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 09:32:55.423973  326776 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 09:32:55.424047  326776 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001178678s
	I1025 09:32:55.424182  326776 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 09:32:55.424302  326776 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1025 09:32:55.424443  326776 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 09:32:55.424552  326776 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 09:32:55.424664  326776 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.374388763s
	I1025 09:32:55.424763  326776 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.787703669s
	I1025 09:32:55.424871  326776 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.502158288s
	I1025 09:32:55.425012  326776 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:32:55.425167  326776 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:32:55.425246  326776 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:32:55.425520  326776 kubeadm.go:318] [mark-control-plane] Marking the node addons-582494 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:32:55.425605  326776 kubeadm.go:318] [bootstrap-token] Using token: i5mo7j.cxciqzlypbk10ivk
	I1025 09:32:55.427024  326776 out.go:252]   - Configuring RBAC rules ...
	I1025 09:32:55.427147  326776 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:32:55.427271  326776 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:32:55.427444  326776 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:32:55.427613  326776 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:32:55.427752  326776 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:32:55.427870  326776 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:32:55.428017  326776 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:32:55.428095  326776 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:32:55.428169  326776 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:32:55.428178  326776 kubeadm.go:318] 
	I1025 09:32:55.428244  326776 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:32:55.428251  326776 kubeadm.go:318] 
	I1025 09:32:55.428360  326776 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:32:55.428379  326776 kubeadm.go:318] 
	I1025 09:32:55.428409  326776 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:32:55.428474  326776 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:32:55.428524  326776 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:32:55.428530  326776 kubeadm.go:318] 
	I1025 09:32:55.428583  326776 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:32:55.428590  326776 kubeadm.go:318] 
	I1025 09:32:55.428633  326776 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:32:55.428639  326776 kubeadm.go:318] 
	I1025 09:32:55.428688  326776 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:32:55.428754  326776 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:32:55.428821  326776 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:32:55.428826  326776 kubeadm.go:318] 
	I1025 09:32:55.428909  326776 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:32:55.429013  326776 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:32:55.429028  326776 kubeadm.go:318] 
	I1025 09:32:55.429140  326776 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token i5mo7j.cxciqzlypbk10ivk \
	I1025 09:32:55.429266  326776 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d220a0fdd8ffa4188e5a3a4b5b37c16072a735a52e5507fcdbcb4d38c461642f \
	I1025 09:32:55.429313  326776 kubeadm.go:318] 	--control-plane 
	I1025 09:32:55.429337  326776 kubeadm.go:318] 
	I1025 09:32:55.429444  326776 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:32:55.429453  326776 kubeadm.go:318] 
	I1025 09:32:55.429565  326776 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token i5mo7j.cxciqzlypbk10ivk \
	I1025 09:32:55.429756  326776 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d220a0fdd8ffa4188e5a3a4b5b37c16072a735a52e5507fcdbcb4d38c461642f 
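	The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA certificate's Subject Public Key Info, which joining nodes use to pin the CA. A small Go sketch of that computation, assuming the ca.crt path from the certs section of this log (the hashing rule itself is the documented kubeadm format):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		raw, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole cert.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}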
	I1025 09:32:55.429780  326776 cni.go:84] Creating CNI manager for ""
	I1025 09:32:55.429787  326776 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:32:55.431084  326776 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 09:32:55.432235  326776 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 09:32:55.437050  326776 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 09:32:55.437068  326776 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 09:32:55.450959  326776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 09:32:55.661119  326776 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:32:55.661217  326776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:32:55.661245  326776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-582494 minikube.k8s.io/updated_at=2025_10_25T09_32_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689 minikube.k8s.io/name=addons-582494 minikube.k8s.io/primary=true
	I1025 09:32:55.672958  326776 ops.go:34] apiserver oom_adj: -16
	I1025 09:32:55.740719  326776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:32:56.240913  326776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:32:56.741511  326776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:32:57.241193  326776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:32:57.740825  326776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:32:58.241159  326776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:32:58.741396  326776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:32:59.241071  326776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:32:59.741394  326776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:00.241553  326776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:00.320295  326776 kubeadm.go:1113] duration metric: took 4.659153612s to wait for elevateKubeSystemPrivileges
	I1025 09:33:00.320360  326776 kubeadm.go:402] duration metric: took 15.086941359s to StartCluster
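	The repeated `kubectl get sa default` runs above are a fixed-interval poll: minikube retries every ~500ms until the default ServiceAccount exists, then records the elapsed time as the duration metric. A generic Go sketch of that poll-until-ready shape (the helper and timeout are illustrative; the command and interval mirror the log):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitFor runs check every interval until it succeeds or ctx expires,
	// returning how long the wait took.
	func waitFor(ctx context.Context, interval time.Duration, check func() error) (time.Duration, error) {
		start := time.Now()
		tick := time.NewTicker(interval)
		defer tick.Stop()
		for {
			if err := check(); err == nil {
				return time.Since(start), nil
			}
			select {
			case <-ctx.Done():
				return time.Since(start), ctx.Err()
			case <-tick.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		took, err := waitFor(ctx, 500*time.Millisecond, func() error {
			return exec.Command("kubectl", "get", "sa", "default").Run()
		})
		fmt.Println(took, err)
	}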
	I1025 09:33:00.320385  326776 settings.go:142] acquiring lock: {Name:mkccf1ae03faaf532633e370e89b56d0fc3d2b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:00.321202  326776 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 09:33:00.321678  326776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:00.321872  326776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 09:33:00.321909  326776 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:33:00.321987  326776 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1025 09:33:00.322115  326776 addons.go:69] Setting yakd=true in profile "addons-582494"
	I1025 09:33:00.322124  326776 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-582494"
	I1025 09:33:00.322144  326776 addons.go:238] Setting addon yakd=true in "addons-582494"
	I1025 09:33:00.322155  326776 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-582494"
	I1025 09:33:00.322163  326776 addons.go:69] Setting metrics-server=true in profile "addons-582494"
	I1025 09:33:00.322194  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.322194  326776 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:33:00.322206  326776 addons.go:69] Setting registry-creds=true in profile "addons-582494"
	I1025 09:33:00.322207  326776 addons.go:238] Setting addon metrics-server=true in "addons-582494"
	I1025 09:33:00.322218  326776 addons.go:238] Setting addon registry-creds=true in "addons-582494"
	I1025 09:33:00.322229  326776 addons.go:69] Setting cloud-spanner=true in profile "addons-582494"
	I1025 09:33:00.322237  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.322243  326776 addons.go:238] Setting addon cloud-spanner=true in "addons-582494"
	I1025 09:33:00.322247  326776 addons.go:69] Setting storage-provisioner=true in profile "addons-582494"
	I1025 09:33:00.322265  326776 addons.go:238] Setting addon storage-provisioner=true in "addons-582494"
	I1025 09:33:00.322275  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.322285  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.322395  326776 addons.go:69] Setting ingress-dns=true in profile "addons-582494"
	I1025 09:33:00.322414  326776 addons.go:238] Setting addon ingress-dns=true in "addons-582494"
	I1025 09:33:00.322447  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.322820  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.322832  326776 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-582494"
	I1025 09:33:00.322844  326776 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-582494"
	I1025 09:33:00.322849  326776 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-582494"
	I1025 09:33:00.322873  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.322889  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.322890  326776 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-582494"
	I1025 09:33:00.322917  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.323107  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.323161  326776 addons.go:69] Setting default-storageclass=true in profile "addons-582494"
	I1025 09:33:00.323186  326776 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-582494"
	I1025 09:33:00.323342  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.323500  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.323972  326776 addons.go:69] Setting ingress=true in profile "addons-582494"
	I1025 09:33:00.323996  326776 addons.go:238] Setting addon ingress=true in "addons-582494"
	I1025 09:33:00.324043  326776 addons.go:69] Setting volumesnapshots=true in profile "addons-582494"
	I1025 09:33:00.324064  326776 addons.go:238] Setting addon volumesnapshots=true in "addons-582494"
	I1025 09:33:00.324095  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.322180  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.324298  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.324764  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.325057  326776 addons.go:69] Setting volcano=true in profile "addons-582494"
	I1025 09:33:00.325109  326776 addons.go:238] Setting addon volcano=true in "addons-582494"
	I1025 09:33:00.325155  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.325668  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.322189  326776 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-582494"
	I1025 09:33:00.326165  326776 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-582494"
	I1025 09:33:00.326200  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.326346  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.326518  326776 out.go:179] * Verifying Kubernetes components...
	I1025 09:33:00.322832  326776 addons.go:69] Setting inspektor-gadget=true in profile "addons-582494"
	I1025 09:33:00.326705  326776 addons.go:238] Setting addon inspektor-gadget=true in "addons-582494"
	I1025 09:33:00.326739  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.322820  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.322239  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.322209  326776 addons.go:69] Setting gcp-auth=true in profile "addons-582494"
	I1025 09:33:00.327122  326776 mustload.go:65] Loading cluster: addons-582494
	I1025 09:33:00.322820  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.322198  326776 addons.go:69] Setting registry=true in profile "addons-582494"
	I1025 09:33:00.327406  326776 addons.go:238] Setting addon registry=true in "addons-582494"
	I1025 09:33:00.327436  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.328655  326776 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:33:00.335771  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.336127  326776 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:33:00.336890  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.338817  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.337355  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.337783  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.339087  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.363766  326776 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1025 09:33:00.363932  326776 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1025 09:33:00.363844  326776 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1025 09:33:00.365294  326776 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 09:33:00.366013  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1025 09:33:00.366094  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.365295  326776 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 09:33:00.366259  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1025 09:33:00.366450  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.367668  326776 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1025 09:33:00.368908  326776 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1025 09:33:00.370109  326776 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1025 09:33:00.371518  326776 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1025 09:33:00.372661  326776 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1025 09:33:00.375395  326776 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1025 09:33:00.376890  326776 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1025 09:33:00.378892  326776 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1025 09:33:00.378916  326776 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1025 09:33:00.379078  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
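	The --format argument in these docker container inspect calls is a Go text/template: index .NetworkSettings.Ports by the "22/tcp" key, take element 0, and read .HostPort. That is how minikube discovers the host-side SSH port (32768 in the sshutil lines that follow). A standalone sketch showing the same template evaluate against mock types (the struct shapes are simplified stand-ins for docker's inspect output):

	package main

	import (
		"os"
		"text/template"
	)

	// Simplified stand-ins for the fields docker's inspect output exposes.
	type portBinding struct{ HostIP, HostPort string }
	type networkSettings struct{ Ports map[string][]portBinding }
	type container struct{ NetworkSettings networkSettings }

	func main() {
		c := container{networkSettings{Ports: map[string][]portBinding{
			"22/tcp": {{HostIP: "127.0.0.1", HostPort: "32768"}},
		}}}
		// Same template string the log shows being passed to `docker container inspect -f`.
		tmpl := template.Must(template.New("p").Parse(
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
		_ = tmpl.Execute(os.Stdout, c) // prints 32768
	}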
	I1025 09:33:00.400992  326776 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-582494"
	I1025 09:33:00.422926  326776 host.go:66] Checking if "addons-582494" exists ...
	W1025 09:33:00.424724  326776 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1025 09:33:00.401149  326776 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1025 09:33:00.427200  326776 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1025 09:33:00.427224  326776 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1025 09:33:00.427293  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.401330  326776 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1025 09:33:00.428501  326776 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1025 09:33:00.421601  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:00.428822  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.430570  326776 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1025 09:33:00.430600  326776 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1025 09:33:00.430667  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.433280  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.433468  326776 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1025 09:33:00.433489  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1025 09:33:00.433545  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.438341  326776 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:33:00.442724  326776 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1025 09:33:00.442820  326776 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:33:00.444524  326776 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1025 09:33:00.444628  326776 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:33:00.444640  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:33:00.444712  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.444885  326776 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1025 09:33:00.444931  326776 out.go:179]   - Using image docker.io/registry:3.0.0
	I1025 09:33:00.446299  326776 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1025 09:33:00.446335  326776 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1025 09:33:00.446396  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.446471  326776 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1025 09:33:00.446526  326776 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:33:00.446583  326776 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1025 09:33:00.446592  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1025 09:33:00.446644  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.446900  326776 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1025 09:33:00.447576  326776 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 09:33:00.447593  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1025 09:33:00.447645  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.447822  326776 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 09:33:00.447838  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1025 09:33:00.447885  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.449177  326776 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 09:33:00.449195  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1025 09:33:00.449244  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.452088  326776 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1025 09:33:00.453312  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:00.453746  326776 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 09:33:00.453762  326776 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 09:33:00.453824  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.490852  326776 addons.go:238] Setting addon default-storageclass=true in "addons-582494"
	I1025 09:33:00.493434  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.494585  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.512082  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:00.512244  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:00.513031  326776 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1025 09:33:00.513795  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:00.515141  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:00.519683  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:00.522482  326776 out.go:179]   - Using image docker.io/busybox:stable
	I1025 09:33:00.524838  326776 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 09:33:00.524902  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1025 09:33:00.524998  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.527857  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:00.536517  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:00.536531  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:00.552218  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:00.557158  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	W1025 09:33:00.558819  326776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 09:33:00.558917  326776 retry.go:31] will retry after 313.093398ms: ssh: handshake failed: EOF
	I1025 09:33:00.563586  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	W1025 09:33:00.564761  326776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 09:33:00.564837  326776 retry.go:31] will retry after 264.747724ms: ssh: handshake failed: EOF
	I1025 09:33:00.566459  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:00.567493  326776 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:33:00.567515  326776 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:33:00.567573  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.591373  326776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 09:33:00.591511  326776 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:33:00.602794  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:00.626541  326776 node_ready.go:35] waiting up to 6m0s for node "addons-582494" to be "Ready" ...
	I1025 09:33:00.678355  326776 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1025 09:33:00.678449  326776 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1025 09:33:00.678457  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 09:33:00.679483  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 09:33:00.704832  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 09:33:00.726152  326776 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1025 09:33:00.726254  326776 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1025 09:33:00.731722  326776 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 09:33:00.731798  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1025 09:33:00.743252  326776 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1025 09:33:00.743278  326776 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1025 09:33:00.753260  326776 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1025 09:33:00.753288  326776 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1025 09:33:00.755191  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 09:33:00.768782  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1025 09:33:00.768920  326776 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1025 09:33:00.768944  326776 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1025 09:33:00.770932  326776 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:00.771001  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1025 09:33:00.772988  326776 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 09:33:00.773010  326776 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 09:33:00.774371  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 09:33:00.775066  326776 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1025 09:33:00.775087  326776 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1025 09:33:00.782253  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:33:00.798641  326776 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1025 09:33:00.798666  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1025 09:33:00.804442  326776 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1025 09:33:00.804470  326776 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1025 09:33:00.823109  326776 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1025 09:33:00.823140  326776 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1025 09:33:00.823197  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:00.826939  326776 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1025 09:33:00.826964  326776 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1025 09:33:00.852122  326776 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 09:33:00.852152  326776 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 09:33:00.867996  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1025 09:33:00.873758  326776 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1025 09:33:00.873874  326776 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1025 09:33:00.877847  326776 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1025 09:33:00.877926  326776 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1025 09:33:00.884801  326776 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1025 09:33:00.884889  326776 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1025 09:33:00.930195  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 09:33:00.942304  326776 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1025 09:33:00.942339  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1025 09:33:00.960508  326776 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1025 09:33:00.960596  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1025 09:33:00.982251  326776 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1025 09:33:00.982358  326776 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1025 09:33:00.984734  326776 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
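The host record injection above is what lets pods resolve host.minikube.internal to the host-side gateway (192.168.49.1 on the default docker network). One way to confirm the entry landed, assuming minikube stores it in the coredns ConfigMap as it does for kic-based drivers, is:

    kubectl -n kube-system get configmap coredns -o yaml | grep -B1 -A1 host.minikube.internal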
	I1025 09:33:01.024941  326776 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1025 09:33:01.024967  326776 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1025 09:33:01.044493  326776 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 09:33:01.044590  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1025 09:33:01.060540  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1025 09:33:01.100374  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 09:33:01.110833  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:33:01.112478  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 09:33:01.119444  326776 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1025 09:33:01.119470  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1025 09:33:01.173195  326776 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1025 09:33:01.173224  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1025 09:33:01.248851  326776 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 09:33:01.248888  326776 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1025 09:33:01.313514  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 09:33:01.492302  326776 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-582494" context rescaled to 1 replicas
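The rescale above trims CoreDNS from the stock two replicas down to the one that a single-node cluster needs; it is equivalent to running:

    kubectl -n kube-system scale deployment coredns --replicas=1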
	I1025 09:33:02.138773  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.460184606s)
	I1025 09:33:02.138822  326776 addons.go:479] Verifying addon ingress=true in "addons-582494"
	I1025 09:33:02.138873  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.459359805s)
	I1025 09:33:02.138978  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.434059505s)
	I1025 09:33:02.139048  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.383778129s)
	I1025 09:33:02.139103  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.370295313s)
	I1025 09:33:02.139157  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.36475525s)
	I1025 09:33:02.139187  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.356910223s)
	I1025 09:33:02.139303  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.316073804s)
	I1025 09:33:02.139374  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.271280895s)
	W1025 09:33:02.139387  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:02.139397  326776 addons.go:479] Verifying addon registry=true in "addons-582494"
	I1025 09:33:02.139407  326776 retry.go:31] will retry after 321.319405ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
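This retry loop cannot converge on its own: kubectl validates each YAML document client-side and rejects any document that omits apiVersion and kind, so re-applying the unchanged ig-crd.yaml fails identically every time (the namespace, RBAC, and daemonset objects from ig-deployment.yaml do go through, which is why they show as created and later unchanged). Every document in the file needs a complete header; for a CRD that would look like the sketch below, where the group and plural are illustrative placeholders rather than values read from the actual file:

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: traces.gadget.example.io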
	I1025 09:33:02.139488  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.20926408s)
	I1025 09:33:02.139618  326776 addons.go:479] Verifying addon metrics-server=true in "addons-582494"
	I1025 09:33:02.139539  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.078972412s)
	I1025 09:33:02.141065  326776 out.go:179] * Verifying ingress addon...
	I1025 09:33:02.141987  326776 out.go:179] * Verifying registry addon...
	I1025 09:33:02.142219  326776 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-582494 service yakd-dashboard -n yakd-dashboard
	
	I1025 09:33:02.143639  326776 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1025 09:33:02.144355  326776 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1025 09:33:02.147190  326776 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1025 09:33:02.147275  326776 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 09:33:02.147295  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
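The kapi.go waits above poll each label selector until all matching pods report Running. Outside the harness, roughly the same checks (selectors copied from the log) are:

    kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
    kubectl -n ingress-nginx wait --for=condition=Ready pod \
      -l app.kubernetes.io/name=ingress-nginx --timeout=300s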
	I1025 09:33:02.461267  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1025 09:33:02.636272  326776 node_ready.go:57] node "addons-582494" has "Ready":"False" status (will retry)
	I1025 09:33:02.654037  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:02.654277  326776 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1025 09:33:02.654304  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:02.695148  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.594719436s)
	W1025 09:33:02.695215  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 09:33:02.695238  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.584366099s)
	I1025 09:33:02.695260  326776 retry.go:31] will retry after 237.720522ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
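Unlike the ig-crd failure, this one is an ordering problem rather than a malformed file: the VolumeSnapshotClass object is submitted in the same apply as the CRD that defines its kind, and the API server has not established the new type by the time the object arrives. The retry a moment later succeeds once the CRD is served; done by hand, the race is avoided by waiting explicitly between the two applies:

    kubectl wait --for=condition=Established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io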
	I1025 09:33:02.695338  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.582813986s)
	I1025 09:33:02.695572  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.382016993s)
	I1025 09:33:02.695599  326776 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-582494"
	I1025 09:33:02.697768  326776 out.go:179] * Verifying csi-hostpath-driver addon...
	I1025 09:33:02.702835  326776 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1025 09:33:02.708385  326776 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 09:33:02.708412  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:02.933915  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1025 09:33:03.125250  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:03.125290  326776 retry.go:31] will retry after 533.24161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:03.147691  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:03.147758  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:03.206284  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:03.647468  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:03.647824  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:03.658755  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:03.749001  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:04.147584  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:04.147777  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:04.206931  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:04.647380  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:04.647496  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:04.705896  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:05.130533  326776 node_ready.go:57] node "addons-582494" has "Ready":"False" status (will retry)
	I1025 09:33:05.147893  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:05.148120  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:05.206132  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:05.455975  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.522005459s)
	I1025 09:33:05.456094  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.797303505s)
	W1025 09:33:05.456138  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:05.456165  326776 retry.go:31] will retry after 313.94334ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:05.647064  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:05.647091  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:05.747878  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:05.770938  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:06.147281  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:06.147366  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:06.206608  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:06.334201  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:06.334246  326776 retry.go:31] will retry after 771.808246ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:06.647595  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:06.647780  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:06.707168  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:07.106689  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:07.148035  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:07.148188  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:07.206277  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:07.629701  326776 node_ready.go:57] node "addons-582494" has "Ready":"False" status (will retry)
	I1025 09:33:07.647390  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:07.647492  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:33:07.668986  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:07.669022  326776 retry.go:31] will retry after 1.487519533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:07.748832  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:08.042596  326776 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1025 09:33:08.042665  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:08.061726  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
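The inspect template above pulls the published host port for the container's 22/tcp out of .NetworkSettings.Ports, which is how the SSH client ends up dialing 127.0.0.1:32768. The shorter equivalent prints the same mapping:

    docker port addons-582494 22
    # e.g. 0.0.0.0:32768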
	I1025 09:33:08.147491  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:08.147542  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:08.174526  326776 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1025 09:33:08.188957  326776 addons.go:238] Setting addon gcp-auth=true in "addons-582494"
	I1025 09:33:08.189029  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:08.189447  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:08.206910  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:08.208467  326776 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1025 09:33:08.208522  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:08.227439  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:08.326243  326776 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:33:08.328064  326776 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1025 09:33:08.329605  326776 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1025 09:33:08.329628  326776 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1025 09:33:08.344125  326776 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1025 09:33:08.344149  326776 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1025 09:33:08.358092  326776 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 09:33:08.358121  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1025 09:33:08.372249  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 09:33:08.647762  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:08.647828  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:08.693418  326776 addons.go:479] Verifying addon gcp-auth=true in "addons-582494"
	I1025 09:33:08.694941  326776 out.go:179] * Verifying gcp-auth addon...
	I1025 09:33:08.696928  326776 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1025 09:33:08.748307  326776 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1025 09:33:08.748353  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:08.748329  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:09.147625  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:09.147800  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:09.156963  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:09.201041  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:09.205807  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:09.630219  326776 node_ready.go:57] node "addons-582494" has "Ready":"False" status (will retry)
	I1025 09:33:09.647441  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:09.647615  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:09.700780  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:09.705742  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:09.732987  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:09.733026  326776 retry.go:31] will retry after 1.844626677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:10.148000  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:10.148018  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:10.200906  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:10.206624  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:10.647251  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:10.647518  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:10.700337  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:10.706089  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:11.147745  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:11.147744  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:11.200596  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:11.206485  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:11.577833  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1025 09:33:11.630399  326776 node_ready.go:57] node "addons-582494" has "Ready":"False" status (will retry)
	I1025 09:33:11.647766  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:11.647825  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:11.701210  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:11.705978  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:12.130396  326776 node_ready.go:49] node "addons-582494" is "Ready"
	I1025 09:33:12.130436  326776 node_ready.go:38] duration metric: took 11.503342705s for node "addons-582494" to be "Ready" ...
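The 11.5s node-ready wait tracks the kubelet flipping the node's Ready condition, which happens only once the CNI (kindnet here) is up. The standalone equivalent of this gate is:

    kubectl wait --for=condition=Ready node/addons-582494 --timeout=120s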
	I1025 09:33:12.130457  326776 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:33:12.130523  326776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
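The pgrep flags are doing real work in that probe: -f matches against the full command line rather than just the process name, -x requires the pattern to match that command line exactly, and -n returns only the newest matching process:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'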
	I1025 09:33:12.150050  326776 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 09:33:12.150075  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:12.150618  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:12.251013  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:12.251081  326776 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 09:33:12.251093  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:12.288199  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:12.288249  326776 retry.go:31] will retry after 2.984999898s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:12.288476  326776 api_server.go:72] duration metric: took 11.966523121s to wait for apiserver process to appear ...
	I1025 09:33:12.288500  326776 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:33:12.288525  326776 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 09:33:12.295017  326776 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
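The healthz probe is a plain HTTPS GET with certificate verification skipped; Kubernetes exposes /healthz, /livez, and /readyz through the system:public-info-viewer binding, so even an unauthenticated caller can reproduce it:

    curl -sk https://192.168.49.2:8443/healthz
    # ok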
	I1025 09:33:12.296371  326776 api_server.go:141] control plane version: v1.34.1
	I1025 09:33:12.296461  326776 api_server.go:131] duration metric: took 7.891368ms to wait for apiserver health ...
	I1025 09:33:12.296495  326776 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:33:12.352756  326776 system_pods.go:59] 20 kube-system pods found
	I1025 09:33:12.352888  326776 system_pods.go:61] "amd-gpu-device-plugin-j28pq" [7fd6ba52-5537-4fa5-b6d7-de8391687595] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1025 09:33:12.352906  326776 system_pods.go:61] "coredns-66bc5c9577-x52sm" [1283554a-bcf8-4dbf-a254-32bae102029a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:33:12.352917  326776 system_pods.go:61] "csi-hostpath-attacher-0" [ed192743-8674-4c36-910a-4f221b5c34cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:33:12.352941  326776 system_pods.go:61] "csi-hostpath-resizer-0" [6663357e-c89f-4029-a4c1-81a7efd0aae8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 09:33:12.352950  326776 system_pods.go:61] "csi-hostpathplugin-s5v6k" [88063809-7a2e-4284-9e35-0f92608ae5d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 09:33:12.352955  326776 system_pods.go:61] "etcd-addons-582494" [53a95eb2-58c4-4595-bd03-e8f5f4dc3ade] Running
	I1025 09:33:12.352961  326776 system_pods.go:61] "kindnet-dkqbp" [374e3d3d-59fa-43d3-b177-cd364ff22112] Running
	I1025 09:33:12.352965  326776 system_pods.go:61] "kube-apiserver-addons-582494" [b5ae9e54-eea9-4505-abde-4cd7985ad6ec] Running
	I1025 09:33:12.352970  326776 system_pods.go:61] "kube-controller-manager-addons-582494" [4a44559f-cdc7-4d75-98fb-184789915356] Running
	I1025 09:33:12.352978  326776 system_pods.go:61] "kube-ingress-dns-minikube" [6ef67c79-353a-44ad-ac94-b0700ae8f69e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:33:12.352984  326776 system_pods.go:61] "kube-proxy-fmsgh" [de3dc975-aa0c-4ff8-bb28-52aa41dbb0a0] Running
	I1025 09:33:12.352989  326776 system_pods.go:61] "kube-scheduler-addons-582494" [6d49ca4e-2b8e-47e4-aab1-129f95c38563] Running
	I1025 09:33:12.352996  326776 system_pods.go:61] "metrics-server-85b7d694d7-wnq6w" [5f738d19-fe71-4220-81a0-135edefc3540] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:33:12.353004  326776 system_pods.go:61] "nvidia-device-plugin-daemonset-wln7g" [b1c5c3bc-84d4-426d-988f-f3fdae1b4501] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 09:33:12.353011  326776 system_pods.go:61] "registry-6b586f9694-jftz9" [8a2e1780-bcf0-4e37-98b1-fef42642e586] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:33:12.353018  326776 system_pods.go:61] "registry-creds-764b6fb674-n9dsg" [fe140945-faea-411c-88be-84e6d8ba91bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:33:12.353026  326776 system_pods.go:61] "registry-proxy-vjtwb" [0113a3a7-cfbd-4a9a-a392-206524677a89] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 09:33:12.353037  326776 system_pods.go:61] "snapshot-controller-7d9fbc56b8-b7qwq" [a47a01ea-848f-4bd6-99f9-6df69490ea84] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:33:12.353044  326776 system_pods.go:61] "snapshot-controller-7d9fbc56b8-kww9w" [c1f07f89-6325-491d-8714-7ca0cac5a197] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:33:12.353051  326776 system_pods.go:61] "storage-provisioner" [58c8e38c-db2a-4b1d-ab4b-7d71e84b5f8a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:33:12.353060  326776 system_pods.go:74] duration metric: took 56.557245ms to wait for pod list to return data ...
	I1025 09:33:12.353074  326776 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:33:12.355665  326776 default_sa.go:45] found service account: "default"
	I1025 09:33:12.355694  326776 default_sa.go:55] duration metric: took 2.613422ms for default service account to be created ...
	I1025 09:33:12.355707  326776 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:33:12.453439  326776 system_pods.go:86] 20 kube-system pods found
	I1025 09:33:12.453483  326776 system_pods.go:89] "amd-gpu-device-plugin-j28pq" [7fd6ba52-5537-4fa5-b6d7-de8391687595] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1025 09:33:12.453497  326776 system_pods.go:89] "coredns-66bc5c9577-x52sm" [1283554a-bcf8-4dbf-a254-32bae102029a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:33:12.453509  326776 system_pods.go:89] "csi-hostpath-attacher-0" [ed192743-8674-4c36-910a-4f221b5c34cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:33:12.453519  326776 system_pods.go:89] "csi-hostpath-resizer-0" [6663357e-c89f-4029-a4c1-81a7efd0aae8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 09:33:12.453528  326776 system_pods.go:89] "csi-hostpathplugin-s5v6k" [88063809-7a2e-4284-9e35-0f92608ae5d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 09:33:12.453535  326776 system_pods.go:89] "etcd-addons-582494" [53a95eb2-58c4-4595-bd03-e8f5f4dc3ade] Running
	I1025 09:33:12.453544  326776 system_pods.go:89] "kindnet-dkqbp" [374e3d3d-59fa-43d3-b177-cd364ff22112] Running
	I1025 09:33:12.453555  326776 system_pods.go:89] "kube-apiserver-addons-582494" [b5ae9e54-eea9-4505-abde-4cd7985ad6ec] Running
	I1025 09:33:12.453577  326776 system_pods.go:89] "kube-controller-manager-addons-582494" [4a44559f-cdc7-4d75-98fb-184789915356] Running
	I1025 09:33:12.453595  326776 system_pods.go:89] "kube-ingress-dns-minikube" [6ef67c79-353a-44ad-ac94-b0700ae8f69e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:33:12.453604  326776 system_pods.go:89] "kube-proxy-fmsgh" [de3dc975-aa0c-4ff8-bb28-52aa41dbb0a0] Running
	I1025 09:33:12.453616  326776 system_pods.go:89] "kube-scheduler-addons-582494" [6d49ca4e-2b8e-47e4-aab1-129f95c38563] Running
	I1025 09:33:12.453625  326776 system_pods.go:89] "metrics-server-85b7d694d7-wnq6w" [5f738d19-fe71-4220-81a0-135edefc3540] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:33:12.453634  326776 system_pods.go:89] "nvidia-device-plugin-daemonset-wln7g" [b1c5c3bc-84d4-426d-988f-f3fdae1b4501] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 09:33:12.453643  326776 system_pods.go:89] "registry-6b586f9694-jftz9" [8a2e1780-bcf0-4e37-98b1-fef42642e586] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:33:12.453653  326776 system_pods.go:89] "registry-creds-764b6fb674-n9dsg" [fe140945-faea-411c-88be-84e6d8ba91bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:33:12.453666  326776 system_pods.go:89] "registry-proxy-vjtwb" [0113a3a7-cfbd-4a9a-a392-206524677a89] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 09:33:12.453680  326776 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b7qwq" [a47a01ea-848f-4bd6-99f9-6df69490ea84] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:33:12.453693  326776 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kww9w" [c1f07f89-6325-491d-8714-7ca0cac5a197] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:33:12.453706  326776 system_pods.go:89] "storage-provisioner" [58c8e38c-db2a-4b1d-ab4b-7d71e84b5f8a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:33:12.453733  326776 retry.go:31] will retry after 224.907087ms: missing components: kube-dns
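The loop above is minikube's system-pods gate: it lists the kube-system pods, checks that every required component reports Running, and retries on a short jittered backoff until kube-dns (coredns) comes up. A rough equivalent with plain kubectl, as a sketch only, since minikube actually drives this through client-go in system_pods.go:

	# Poll until the coredns pods (labeled k8s-app=kube-dns) report Running.
	until kubectl -n kube-system get pods -l k8s-app=kube-dns \
	      -o jsonpath='{.items[*].status.phase}' | grep -q Running; do
	  sleep 0.25   # comparable to the ~225ms backoff logged above
	done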
	I1025 09:33:12.647801  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:12.647856  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:12.683027  326776 system_pods.go:86] 20 kube-system pods found
	I1025 09:33:12.683065  326776 system_pods.go:89] "amd-gpu-device-plugin-j28pq" [7fd6ba52-5537-4fa5-b6d7-de8391687595] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1025 09:33:12.683075  326776 system_pods.go:89] "coredns-66bc5c9577-x52sm" [1283554a-bcf8-4dbf-a254-32bae102029a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:33:12.683086  326776 system_pods.go:89] "csi-hostpath-attacher-0" [ed192743-8674-4c36-910a-4f221b5c34cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:33:12.683094  326776 system_pods.go:89] "csi-hostpath-resizer-0" [6663357e-c89f-4029-a4c1-81a7efd0aae8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 09:33:12.683102  326776 system_pods.go:89] "csi-hostpathplugin-s5v6k" [88063809-7a2e-4284-9e35-0f92608ae5d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 09:33:12.683108  326776 system_pods.go:89] "etcd-addons-582494" [53a95eb2-58c4-4595-bd03-e8f5f4dc3ade] Running
	I1025 09:33:12.683115  326776 system_pods.go:89] "kindnet-dkqbp" [374e3d3d-59fa-43d3-b177-cd364ff22112] Running
	I1025 09:33:12.683122  326776 system_pods.go:89] "kube-apiserver-addons-582494" [b5ae9e54-eea9-4505-abde-4cd7985ad6ec] Running
	I1025 09:33:12.683128  326776 system_pods.go:89] "kube-controller-manager-addons-582494" [4a44559f-cdc7-4d75-98fb-184789915356] Running
	I1025 09:33:12.683136  326776 system_pods.go:89] "kube-ingress-dns-minikube" [6ef67c79-353a-44ad-ac94-b0700ae8f69e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:33:12.683145  326776 system_pods.go:89] "kube-proxy-fmsgh" [de3dc975-aa0c-4ff8-bb28-52aa41dbb0a0] Running
	I1025 09:33:12.683152  326776 system_pods.go:89] "kube-scheduler-addons-582494" [6d49ca4e-2b8e-47e4-aab1-129f95c38563] Running
	I1025 09:33:12.683160  326776 system_pods.go:89] "metrics-server-85b7d694d7-wnq6w" [5f738d19-fe71-4220-81a0-135edefc3540] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:33:12.683170  326776 system_pods.go:89] "nvidia-device-plugin-daemonset-wln7g" [b1c5c3bc-84d4-426d-988f-f3fdae1b4501] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 09:33:12.683180  326776 system_pods.go:89] "registry-6b586f9694-jftz9" [8a2e1780-bcf0-4e37-98b1-fef42642e586] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:33:12.683189  326776 system_pods.go:89] "registry-creds-764b6fb674-n9dsg" [fe140945-faea-411c-88be-84e6d8ba91bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:33:12.683200  326776 system_pods.go:89] "registry-proxy-vjtwb" [0113a3a7-cfbd-4a9a-a392-206524677a89] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 09:33:12.683212  326776 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b7qwq" [a47a01ea-848f-4bd6-99f9-6df69490ea84] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:33:12.683225  326776 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kww9w" [c1f07f89-6325-491d-8714-7ca0cac5a197] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:33:12.683233  326776 system_pods.go:89] "storage-provisioner" [58c8e38c-db2a-4b1d-ab4b-7d71e84b5f8a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:33:12.683256  326776 retry.go:31] will retry after 240.596808ms: missing components: kube-dns
	I1025 09:33:12.700625  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:12.706728  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:12.930437  326776 system_pods.go:86] 20 kube-system pods found
	I1025 09:33:12.930476  326776 system_pods.go:89] "amd-gpu-device-plugin-j28pq" [7fd6ba52-5537-4fa5-b6d7-de8391687595] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1025 09:33:12.930484  326776 system_pods.go:89] "coredns-66bc5c9577-x52sm" [1283554a-bcf8-4dbf-a254-32bae102029a] Running
	I1025 09:33:12.930495  326776 system_pods.go:89] "csi-hostpath-attacher-0" [ed192743-8674-4c36-910a-4f221b5c34cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:33:12.930503  326776 system_pods.go:89] "csi-hostpath-resizer-0" [6663357e-c89f-4029-a4c1-81a7efd0aae8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 09:33:12.930512  326776 system_pods.go:89] "csi-hostpathplugin-s5v6k" [88063809-7a2e-4284-9e35-0f92608ae5d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 09:33:12.930519  326776 system_pods.go:89] "etcd-addons-582494" [53a95eb2-58c4-4595-bd03-e8f5f4dc3ade] Running
	I1025 09:33:12.930534  326776 system_pods.go:89] "kindnet-dkqbp" [374e3d3d-59fa-43d3-b177-cd364ff22112] Running
	I1025 09:33:12.930544  326776 system_pods.go:89] "kube-apiserver-addons-582494" [b5ae9e54-eea9-4505-abde-4cd7985ad6ec] Running
	I1025 09:33:12.930559  326776 system_pods.go:89] "kube-controller-manager-addons-582494" [4a44559f-cdc7-4d75-98fb-184789915356] Running
	I1025 09:33:12.930575  326776 system_pods.go:89] "kube-ingress-dns-minikube" [6ef67c79-353a-44ad-ac94-b0700ae8f69e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:33:12.930585  326776 system_pods.go:89] "kube-proxy-fmsgh" [de3dc975-aa0c-4ff8-bb28-52aa41dbb0a0] Running
	I1025 09:33:12.930591  326776 system_pods.go:89] "kube-scheduler-addons-582494" [6d49ca4e-2b8e-47e4-aab1-129f95c38563] Running
	I1025 09:33:12.930602  326776 system_pods.go:89] "metrics-server-85b7d694d7-wnq6w" [5f738d19-fe71-4220-81a0-135edefc3540] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:33:12.930611  326776 system_pods.go:89] "nvidia-device-plugin-daemonset-wln7g" [b1c5c3bc-84d4-426d-988f-f3fdae1b4501] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 09:33:12.930622  326776 system_pods.go:89] "registry-6b586f9694-jftz9" [8a2e1780-bcf0-4e37-98b1-fef42642e586] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:33:12.930634  326776 system_pods.go:89] "registry-creds-764b6fb674-n9dsg" [fe140945-faea-411c-88be-84e6d8ba91bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:33:12.930642  326776 system_pods.go:89] "registry-proxy-vjtwb" [0113a3a7-cfbd-4a9a-a392-206524677a89] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 09:33:12.930649  326776 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b7qwq" [a47a01ea-848f-4bd6-99f9-6df69490ea84] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:33:12.930661  326776 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kww9w" [c1f07f89-6325-491d-8714-7ca0cac5a197] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:33:12.930666  326776 system_pods.go:89] "storage-provisioner" [58c8e38c-db2a-4b1d-ab4b-7d71e84b5f8a] Running
	I1025 09:33:12.930689  326776 system_pods.go:126] duration metric: took 574.973901ms to wait for k8s-apps to be running ...
	I1025 09:33:12.930702  326776 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:33:12.930766  326776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:33:12.949614  326776 system_svc.go:56] duration metric: took 18.896651ms WaitForService to wait for kubelet
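The kubelet check relies on systemctl's exit code rather than its output: with --quiet, `systemctl is-active` prints nothing and returns 0 only when the unit is active, which is why the runner needs only the command's status. A minimal stand-alone version of the same check:

	# Exit status 0 means the unit is active; all output is suppressed.
	sudo systemctl is-active --quiet kubelet && echo "kubelet is running"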
	I1025 09:33:12.949655  326776 kubeadm.go:586] duration metric: took 12.627712487s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:33:12.949683  326776 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:33:12.953383  326776 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:33:12.953418  326776 node_conditions.go:123] node cpu capacity is 8
	I1025 09:33:12.953434  326776 node_conditions.go:105] duration metric: took 3.744419ms to run NodePressure ...
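The NodePressure step reads the capacity figures above straight from the node's status. One way to pull the same numbers by hand, assuming kubectl points at this cluster:

	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}: cpu={.status.capacity.cpu} ephemeral-storage={.status.capacity.ephemeral-storage}{"\n"}{end}'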
	I1025 09:33:12.953451  326776 start.go:241] waiting for startup goroutines ...
	I1025 09:33:13.148185  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:13.148576  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:13.201056  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:13.206526  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:13.647640  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:13.647674  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:13.701152  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:13.706779  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:14.149201  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:14.149251  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:14.200696  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:14.207178  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:14.647865  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:14.648021  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:14.701140  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:14.706060  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:15.148443  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:15.148496  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:15.200692  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:15.206510  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:15.273426  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:15.647947  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:15.648017  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:15.700770  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:15.707483  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:15.975859  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:15.975896  326776 retry.go:31] will retry after 4.599527408s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
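The stderr line pinpoints the failure: kubectl's client-side validation requires every YAML document in a manifest to set apiVersion and kind, so one malformed document in ig-crd.yaml fails the whole apply even though every object from ig-deployment.yaml still lands (all the "unchanged"/"configured" lines in stdout). A hypothetical repro of the same message, with a made-up file path:

	# Second document sets fields but no apiVersion/kind; validation rejects it.
	cat > /tmp/bad.yaml <<-'EOF'
	apiVersion: v1
	kind: Namespace
	metadata:
	  name: demo
	---
	metadata:
	  name: broken-object
	EOF
	kubectl apply --dry-run=client -f /tmp/bad.yaml
	# error: error validating "/tmp/bad.yaml": error validating data:
	# [apiVersion not set, kind not set]; ...

As the message itself notes, --validate=false would suppress the check, but the real fix is a well-formed document.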
	[kapi.go polling repeats roughly every 500ms from 09:33:16 through 09:33:20; registry, ingress-nginx, gcp-auth and csi-hostpath-driver pods all remain Pending]
	I1025 09:33:20.575689  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:20.648372  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:20.648409  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:20.700343  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:20.706303  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:21.143200  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:21.143239  326776 retry.go:31] will retry after 5.115419773s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	[kapi.go polling repeats from 09:33:21 through 09:33:26; all four addon selectors remain Pending]
	I1025 09:33:26.259098  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:26.647846  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:26.648000  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:26.700227  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:26.706132  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:26.970750  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:26.970789  326776 retry.go:31] will retry after 8.001289699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	[kapi.go polling repeats from 09:33:27 through 09:33:34; all four addon selectors remain Pending]
	I1025 09:33:34.972439  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:35.148514  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:35.148843  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:35.200209  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:35.206644  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:35.649907  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:35.651710  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:35.704376  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:35.710341  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:35.805688  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:35.805786  326776 retry.go:31] will retry after 18.678082557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
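The retry intervals logged for this apply (4.6s, 5.1s, 8.0s, now 18.7s) follow minikube's jittered, roughly exponential backoff. As a sketch only, not minikube's actual retry.go, the shape of the loop is:

	# Re-run the apply with a doubling delay between attempts.
	delay=5
	for attempt in 1 2 3 4; do
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.34.1/kubectl apply --force \
	    -f /etc/kubernetes/addons/ig-crd.yaml \
	    -f /etc/kubernetes/addons/ig-deployment.yaml && break
	  sleep "$delay"
	  delay=$((delay * 2))
	done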
	[kapi.go polling repeats from 09:33:36 through 09:33:46; all four addon selectors remain Pending]
	I1025 09:33:47.147870  326776 kapi.go:107] duration metric: took 45.003509589s to wait for kubernetes.io/minikube-addons=registry ...
	I1025 09:33:47.148041  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:47.245668  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:47.246558  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:47.647867  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:47.700774  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:47.707173  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:48.148676  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:48.201026  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:48.206944  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:48.647901  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:48.727073  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:48.727343  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:49.148563  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:49.200436  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:49.206372  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:49.648266  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:49.710231  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:49.710982  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:50.147266  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:50.200742  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:50.206967  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:50.648437  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:50.700983  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:50.707347  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:51.147858  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:51.201116  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:51.206667  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:51.647144  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:51.702066  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:51.706041  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:52.148109  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:52.200963  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:52.207508  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:52.647556  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:52.700560  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:52.706825  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:53.149235  326776 kapi.go:107] duration metric: took 51.005594228s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1025 09:33:53.200997  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:53.207374  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:53.701640  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:53.706536  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:54.201103  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:54.205921  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:54.484177  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:54.701663  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:54.707079  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:55.200178  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:33:55.202487  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:55.202520  326776 retry.go:31] will retry after 21.963178346s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:55.206889  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:55.700374  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:55.706491  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:56.201092  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:56.205520  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:56.701017  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:56.705817  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:57.200559  326776 kapi.go:107] duration metric: took 48.503627728s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1025 09:33:57.202487  326776 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-582494 cluster.
	I1025 09:33:57.203982  326776 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1025 09:33:57.205425  326776 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
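
The `gcp-auth-skip-secret` opt-out mentioned above is an ordinary pod label; a minimal sketch of where it goes, with the pod name hypothetical and the "true" value assumed from minikube's documented convention:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: my-pod                     # hypothetical name
	  labels:
	    gcp-auth-skip-secret: "true"   # webhook skips mounting credentials into this pod
	spec:
	  containers:
	  - name: app
	    image: busybox
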
	I1025 09:33:57.206388  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:57.706972  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:58.207442  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:58.707540  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:59.207379  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:59.706573  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:00.207793  326776 kapi.go:107] duration metric: took 57.504955185s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
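
Each kapi.go:96 line above is one poll of a label selector until the matching pods leave Pending, and the kapi.go:107 lines record the total wait. The same check can be reproduced by hand (selector copied from the log; kube-system is where the CSI pods land, per the container listing further down):

	kubectl get pods -n kube-system -l kubernetes.io/minikube-addons=csi-hostpath-driver
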
	I1025 09:34:17.168030  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1025 09:34:17.732021  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:34:17.732050  326776 retry.go:31] will retry after 29.095006215s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:34:46.828844  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1025 09:34:47.397373  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:34:47.397522  326776 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
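
The three failed applies above are one manifest-validation error, not a cluster problem: kubectl refuses ig-crd.yaml because at least one YAML document in it reaches the validator without the mandatory top-level apiVersion and kind fields (an empty document left behind by templating triggers the same message; everything in ig-deployment.yaml clearly applied fine). Every document kubectl applies needs a header along these lines; the CRD name below is a placeholder, since the log never shows the file's contents:

	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition        # both fields must appear in every YAML document
	metadata:
	  name: example.gadget.example.io     # placeholder; the actual name is not in the log
	spec:
	  ...

The --validate=false hint in the error output would only mask the problem; the retries fail identically because the file on disk never changes between attempts.
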
	I1025 09:34:47.402000  326776 out.go:179] * Enabled addons: ingress-dns, amd-gpu-device-plugin, registry-creds, cloud-spanner, nvidia-device-plugin, metrics-server, yakd, default-storageclass, storage-provisioner, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1025 09:34:47.403393  326776 addons.go:514] duration metric: took 1m47.08141246s for enable addons: enabled=[ingress-dns amd-gpu-device-plugin registry-creds cloud-spanner nvidia-device-plugin metrics-server yakd default-storageclass storage-provisioner storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1025 09:34:47.403458  326776 start.go:246] waiting for cluster config update ...
	I1025 09:34:47.403481  326776 start.go:255] writing updated cluster config ...
	I1025 09:34:47.403801  326776 ssh_runner.go:195] Run: rm -f paused
	I1025 09:34:47.408274  326776 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:34:47.412416  326776 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x52sm" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:47.417036  326776 pod_ready.go:94] pod "coredns-66bc5c9577-x52sm" is "Ready"
	I1025 09:34:47.417067  326776 pod_ready.go:86] duration metric: took 4.625059ms for pod "coredns-66bc5c9577-x52sm" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:47.419174  326776 pod_ready.go:83] waiting for pod "etcd-addons-582494" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:47.423232  326776 pod_ready.go:94] pod "etcd-addons-582494" is "Ready"
	I1025 09:34:47.423254  326776 pod_ready.go:86] duration metric: took 4.057225ms for pod "etcd-addons-582494" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:47.425289  326776 pod_ready.go:83] waiting for pod "kube-apiserver-addons-582494" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:47.429061  326776 pod_ready.go:94] pod "kube-apiserver-addons-582494" is "Ready"
	I1025 09:34:47.429083  326776 pod_ready.go:86] duration metric: took 3.772431ms for pod "kube-apiserver-addons-582494" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:47.430941  326776 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-582494" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:47.813640  326776 pod_ready.go:94] pod "kube-controller-manager-addons-582494" is "Ready"
	I1025 09:34:47.813671  326776 pod_ready.go:86] duration metric: took 382.708184ms for pod "kube-controller-manager-addons-582494" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:48.013188  326776 pod_ready.go:83] waiting for pod "kube-proxy-fmsgh" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:48.413228  326776 pod_ready.go:94] pod "kube-proxy-fmsgh" is "Ready"
	I1025 09:34:48.413257  326776 pod_ready.go:86] duration metric: took 400.043463ms for pod "kube-proxy-fmsgh" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:48.612735  326776 pod_ready.go:83] waiting for pod "kube-scheduler-addons-582494" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:49.012835  326776 pod_ready.go:94] pod "kube-scheduler-addons-582494" is "Ready"
	I1025 09:34:49.012862  326776 pod_ready.go:86] duration metric: took 400.092842ms for pod "kube-scheduler-addons-582494" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:49.012873  326776 pod_ready.go:40] duration metric: took 1.604563144s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:34:49.061617  326776 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:34:49.063460  326776 out.go:179] * Done! kubectl is now configured to use "addons-582494" cluster and "default" namespace by default
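
The pod_ready waits above poll the Ready condition on each labelled kube-system pod; kubectl can express the same wait directly (label and the 4m budget copied from the log, etcd shown as one example component):

	kubectl wait --for=condition=Ready pod -l component=etcd -n kube-system --timeout=4m
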
	
	
	==> CRI-O <==
	Oct 25 09:35:54 addons-582494 crio[771]: time="2025-10-25T09:35:54.751933788Z" level=info msg="Removing pod sandbox: 79745917bc210b91284e9b2aac5f29f138e96a5d269601dabdd8dc7a7a54d229" id=29b54be9-e62f-4890-a962-82744104cfc9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:35:54 addons-582494 crio[771]: time="2025-10-25T09:35:54.755284911Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:35:54 addons-582494 crio[771]: time="2025-10-25T09:35:54.755525693Z" level=info msg="Removed pod sandbox: 79745917bc210b91284e9b2aac5f29f138e96a5d269601dabdd8dc7a7a54d229" id=29b54be9-e62f-4890-a962-82744104cfc9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:37:37 addons-582494 crio[771]: time="2025-10-25T09:37:37.833292125Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-hnrd7/POD" id=447023b3-ee14-41b6-8cc1-1e4bd5557d4d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:37:37 addons-582494 crio[771]: time="2025-10-25T09:37:37.833424145Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:37:37 addons-582494 crio[771]: time="2025-10-25T09:37:37.841049716Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-hnrd7 Namespace:default ID:618e1ddd4ddda1ff2a23b803f28b57fb8681455ba9a3ef42823c8c8eac3453fb UID:5267b033-da5b-4c2a-a1a1-58bb077b7b69 NetNS:/var/run/netns/1974eb87-f813-46a5-96fe-17cbfe05b111 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000129098}] Aliases:map[]}"
	Oct 25 09:37:37 addons-582494 crio[771]: time="2025-10-25T09:37:37.841087488Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-hnrd7 to CNI network \"kindnet\" (type=ptp)"
	Oct 25 09:37:37 addons-582494 crio[771]: time="2025-10-25T09:37:37.85221024Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-hnrd7 Namespace:default ID:618e1ddd4ddda1ff2a23b803f28b57fb8681455ba9a3ef42823c8c8eac3453fb UID:5267b033-da5b-4c2a-a1a1-58bb077b7b69 NetNS:/var/run/netns/1974eb87-f813-46a5-96fe-17cbfe05b111 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000129098}] Aliases:map[]}"
	Oct 25 09:37:37 addons-582494 crio[771]: time="2025-10-25T09:37:37.852381222Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-hnrd7 for CNI network kindnet (type=ptp)"
	Oct 25 09:37:37 addons-582494 crio[771]: time="2025-10-25T09:37:37.853478834Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:37:37 addons-582494 crio[771]: time="2025-10-25T09:37:37.855309101Z" level=info msg="Ran pod sandbox 618e1ddd4ddda1ff2a23b803f28b57fb8681455ba9a3ef42823c8c8eac3453fb with infra container: default/hello-world-app-5d498dc89-hnrd7/POD" id=447023b3-ee14-41b6-8cc1-1e4bd5557d4d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:37:37 addons-582494 crio[771]: time="2025-10-25T09:37:37.85779291Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=2e7131cb-ed12-4e34-b78f-63b6e276dbce name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:37:37 addons-582494 crio[771]: time="2025-10-25T09:37:37.857956578Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=2e7131cb-ed12-4e34-b78f-63b6e276dbce name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:37:37 addons-582494 crio[771]: time="2025-10-25T09:37:37.858001742Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=2e7131cb-ed12-4e34-b78f-63b6e276dbce name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:37:37 addons-582494 crio[771]: time="2025-10-25T09:37:37.858839224Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=2c6ab7a1-b5c8-4953-b066-3bea4d612517 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:37:37 addons-582494 crio[771]: time="2025-10-25T09:37:37.863184142Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 25 09:37:38 addons-582494 crio[771]: time="2025-10-25T09:37:38.986691416Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=2c6ab7a1-b5c8-4953-b066-3bea4d612517 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:37:38 addons-582494 crio[771]: time="2025-10-25T09:37:38.987467259Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=f8e649dd-5ee1-4c98-82f7-95bb527bec93 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:37:38 addons-582494 crio[771]: time="2025-10-25T09:37:38.989217959Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=dd468e3a-a779-4889-88dc-7de519ad7e94 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:37:38 addons-582494 crio[771]: time="2025-10-25T09:37:38.993414349Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-hnrd7/hello-world-app" id=e4438f34-15ad-4dd1-bfd3-59e575f266d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:37:38 addons-582494 crio[771]: time="2025-10-25T09:37:38.993562914Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:37:39 addons-582494 crio[771]: time="2025-10-25T09:37:39.003926716Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:37:39 addons-582494 crio[771]: time="2025-10-25T09:37:39.004172184Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b00876985e5a71b46e0c3697832e7ab3dc5296c13177c7515695fd5f6847e85b/merged/etc/passwd: no such file or directory"
	Oct 25 09:37:39 addons-582494 crio[771]: time="2025-10-25T09:37:39.00421595Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b00876985e5a71b46e0c3697832e7ab3dc5296c13177c7515695fd5f6847e85b/merged/etc/group: no such file or directory"
	Oct 25 09:37:39 addons-582494 crio[771]: time="2025-10-25T09:37:39.004551084Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
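
The sequence above is the normal CRI cold-pull path: ImageStatus finds no local copy, PullImage fetches the tag and resolves it to a digest, and CreateContainer follows. The same RPCs can be issued by hand on the node with crictl:

	sudo crictl inspecti docker.io/kicbase/echo-server:1.0   # ImageStatus
	sudo crictl pull docker.io/kicbase/echo-server:1.0       # PullImage
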
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	541faf7abb7bc       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   618e1ddd4ddda       hello-world-app-5d498dc89-hnrd7             default
	9133b79deac29       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             2 minutes ago            Running             registry-creds                           0                   344bfc5c706e3       registry-creds-764b6fb674-n9dsg             kube-system
	278758d812153       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                                              2 minutes ago            Running             nginx                                    0                   6438c9e01e1fd       nginx                                       default
	1ee1af5d7cbcc       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   cac9993f4feac       busybox                                     default
	a590641d19544       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          3 minutes ago            Running             csi-snapshotter                          0                   913c85549ca1b       csi-hostpathplugin-s5v6k                    kube-system
	14680175d4318       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago            Running             csi-provisioner                          0                   913c85549ca1b       csi-hostpathplugin-s5v6k                    kube-system
	8b3ea24513b9d       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago            Running             liveness-probe                           0                   913c85549ca1b       csi-hostpathplugin-s5v6k                    kube-system
	1c33d20dccf9d       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago            Running             hostpath                                 0                   913c85549ca1b       csi-hostpathplugin-s5v6k                    kube-system
	5f498c8f7524b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 3 minutes ago            Running             gcp-auth                                 0                   b54023d98bc99       gcp-auth-78565c9fb4-fbgsp                   gcp-auth
	19e3e274001e7       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago            Running             node-driver-registrar                    0                   913c85549ca1b       csi-hostpathplugin-s5v6k                    kube-system
	30c87a2348b53       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             3 minutes ago            Running             controller                               0                   910abc08088b2       ingress-nginx-controller-675c5ddd98-99ltz   ingress-nginx
	c77d73d1bd9c3       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            3 minutes ago            Running             gadget                                   0                   49fcaf6cd8f16       gadget-mhs6l                                gadget
	fd4a5a7d8c5f4       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago            Running             registry-proxy                           0                   45a541774711d       registry-proxy-vjtwb                        kube-system
	aaacd09fa43cb       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago            Running             amd-gpu-device-plugin                    0                   7ebd6ddef915d       amd-gpu-device-plugin-j28pq                 kube-system
	5f1abc3fa71fd       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   3c84383feb961       nvidia-device-plugin-daemonset-wln7g        kube-system
	ba8a2ae228e5a       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      4 minutes ago            Running             volume-snapshot-controller               0                   e8787f64bb07c       snapshot-controller-7d9fbc56b8-kww9w        kube-system
	b2e5cedb9fdb4       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   4 minutes ago            Running             csi-external-health-monitor-controller   0                   913c85549ca1b       csi-hostpathplugin-s5v6k                    kube-system
	53959ea9bc3e2       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             4 minutes ago            Running             csi-attacher                             0                   39f75f3f82bb0       csi-hostpath-attacher-0                     kube-system
	214643b0e233a       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              4 minutes ago            Running             csi-resizer                              0                   98773678657d3       csi-hostpath-resizer-0                      kube-system
	1b757b91d048a       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                                             4 minutes ago            Exited              patch                                    2                   d1aa5e992c5bc       ingress-nginx-admission-patch-l8h7x         ingress-nginx
	1bf763269e9c6       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           4 minutes ago            Running             registry                                 0                   f22742830a23f       registry-6b586f9694-jftz9                   kube-system
	e59c39fff2eab       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               4 minutes ago            Running             minikube-ingress-dns                     0                   fb4ec008b0c49       kube-ingress-dns-minikube                   kube-system
	da4dbc32e1215       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              4 minutes ago            Running             yakd                                     0                   0a668596eb605       yakd-dashboard-5ff678cb9-bjt42              yakd-dashboard
	d884ae3f8ba28       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               4 minutes ago            Running             cloud-spanner-emulator                   0                   4eab3c3d495c0       cloud-spanner-emulator-86bd5cbb97-7f4kh     default
	1712ecb1d7d91       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   4 minutes ago            Exited              create                                   0                   9b6ee7a455638       ingress-nginx-admission-create-jk78g        ingress-nginx
	42f1c21ebcd71       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      4 minutes ago            Running             volume-snapshot-controller               0                   c4d9203de5ce2       snapshot-controller-7d9fbc56b8-b7qwq        kube-system
	08e112895de76       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             4 minutes ago            Running             local-path-provisioner                   0                   bc8fbd25e7ea4       local-path-provisioner-648f6765c9-sdhd9     local-path-storage
	eb8b6e448a834       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        4 minutes ago            Running             metrics-server                           0                   ff9d7c2e6014b       metrics-server-85b7d694d7-wnq6w             kube-system
	d0697f1703581       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             4 minutes ago            Running             coredns                                  0                   b4e31d2cb7d28       coredns-66bc5c9577-x52sm                    kube-system
	b927c0ae13deb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             4 minutes ago            Running             storage-provisioner                      0                   fa266bb95ca83       storage-provisioner                         kube-system
	29d8b7fdf8c84       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago            Running             kube-proxy                               0                   d47b121ec8a23       kube-proxy-fmsgh                            kube-system
	d0a9822bc2dd8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago            Running             kindnet-cni                              0                   f90cf1faeec9f       kindnet-dkqbp                               kube-system
	19a44bd56404c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago            Running             etcd                                     0                   336012917a173       etcd-addons-582494                          kube-system
	9b9bb34ede66b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago            Running             kube-apiserver                           0                   22c6717fe0966       kube-apiserver-addons-582494                kube-system
	d0ccb48b50e7a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago            Running             kube-controller-manager                  0                   a3e1459f47202       kube-controller-manager-addons-582494       kube-system
	62e249a5b3adf       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago            Running             kube-scheduler                           0                   ccb17b5db09e5       kube-scheduler-addons-582494                kube-system
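
This listing is the CRI view of the node and can be regenerated in place with crictl, where -a also includes the two Exited admission jobs shown above:

	sudo crictl ps -a
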
	
	
	==> coredns [d0697f1703581cace0f227a3658ea08db9401a3bc389367933d7353764a27242] <==
	[INFO] 10.244.0.22:60817 - 46199 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.00582152s
	[INFO] 10.244.0.22:42767 - 10907 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004663784s
	[INFO] 10.244.0.22:44258 - 39572 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005061424s
	[INFO] 10.244.0.22:33819 - 51161 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004034718s
	[INFO] 10.244.0.22:38876 - 33170 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004378859s
	[INFO] 10.244.0.22:36523 - 52445 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000930965s
	[INFO] 10.244.0.22:40205 - 21697 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001233336s
	[INFO] 10.244.0.26:58728 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000219949s
	[INFO] 10.244.0.26:42227 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000228388s
	[INFO] 10.244.0.30:38988 - 55851 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.00021543s
	[INFO] 10.244.0.30:46299 - 60787 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000262949s
	[INFO] 10.244.0.30:58109 - 1547 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.0001621s
	[INFO] 10.244.0.30:57585 - 33017 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000165393s
	[INFO] 10.244.0.30:47910 - 32333 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000102923s
	[INFO] 10.244.0.30:60669 - 40010 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000125775s
	[INFO] 10.244.0.30:59634 - 61485 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.00345482s
	[INFO] 10.244.0.30:49059 - 28078 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003497413s
	[INFO] 10.244.0.30:41079 - 37602 "A IN accounts.google.com.us-west1-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.00474509s
	[INFO] 10.244.0.30:39909 - 19346 "AAAA IN accounts.google.com.us-west1-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.00480247s
	[INFO] 10.244.0.30:40465 - 47403 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004599878s
	[INFO] 10.244.0.30:54120 - 13013 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.007878992s
	[INFO] 10.244.0.30:35036 - 15944 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004720326s
	[INFO] 10.244.0.30:57907 - 15686 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004761404s
	[INFO] 10.244.0.30:59116 - 27541 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001846603s
	[INFO] 10.244.0.30:36124 - 38985 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001885048s
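
The NXDOMAIN ladders above are ordinary search-path expansion: with the Kubernetes default of ndots:5, a name such as accounts.google.com is tried against every suffix in the pod's search list before being sent upstream verbatim (the two final NOERROR answers). Reconstructed from the suffixes in these queries rather than read from a node, the pod resolv.conf would look roughly like:

	search kube-system.svc.cluster.local svc.cluster.local cluster.local local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	nameserver 10.96.0.10   # conventional kube-dns ClusterIP; assumed, not shown in the log
	options ndots:5
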
	
	
	==> describe nodes <==
	Name:               addons-582494
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-582494
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=addons-582494
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_32_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-582494
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-582494"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:32:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-582494
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:37:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:37:31 +0000   Sat, 25 Oct 2025 09:32:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:37:31 +0000   Sat, 25 Oct 2025 09:32:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:37:31 +0000   Sat, 25 Oct 2025 09:32:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:37:31 +0000   Sat, 25 Oct 2025 09:33:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-582494
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                210d01d7-a029-4efc-9521-d1eac2e4328a
	  Boot ID:                    41de8bfa-0cc1-441f-80d4-c56fb8371229
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  default                     cloud-spanner-emulator-86bd5cbb97-7f4kh      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  default                     hello-world-app-5d498dc89-hnrd7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  gadget                      gadget-mhs6l                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  gcp-auth                    gcp-auth-78565c9fb4-fbgsp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-99ltz    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m37s
	  kube-system                 amd-gpu-device-plugin-j28pq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 coredns-66bc5c9577-x52sm                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m39s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 csi-hostpathplugin-s5v6k                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 etcd-addons-582494                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m45s
	  kube-system                 kindnet-dkqbp                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m39s
	  kube-system                 kube-apiserver-addons-582494                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-controller-manager-addons-582494        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-proxy-fmsgh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-scheduler-addons-582494                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 metrics-server-85b7d694d7-wnq6w              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m38s
	  kube-system                 nvidia-device-plugin-daemonset-wln7g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 registry-6b586f9694-jftz9                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 registry-creds-764b6fb674-n9dsg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 registry-proxy-vjtwb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 snapshot-controller-7d9fbc56b8-b7qwq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 snapshot-controller-7d9fbc56b8-kww9w         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  local-path-storage          local-path-provisioner-648f6765c9-sdhd9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-bjt42               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m37s  kube-proxy       
	  Normal  Starting                 4m45s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m45s  kubelet          Node addons-582494 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m45s  kubelet          Node addons-582494 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m45s  kubelet          Node addons-582494 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m40s  node-controller  Node addons-582494 event: Registered Node addons-582494 in Controller
	  Normal  NodeReady                4m28s  kubelet          Node addons-582494 status is now: NodeReady
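
The percentages in the Allocated resources block are integer shares of the node's Allocatable figures; a quick shell sanity check with the values from the table (integer division matches kubectl's rounding here):

	echo $((1050 * 100 / 8000))            # CPU requests: 1050m of 8 CPUs (8000m) -> 13
	echo $((638 * 1024 * 100 / 32863352))  # memory requests: 638Mi of 32863352Ki -> 1
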
	
	
	==> dmesg <==
	[  +0.000020] ll header: 00000000: ff ff ff ff ff ff 16 b3 d7 05 74 b5 08 06
	[ +20.912051] IPv4: martian source 10.244.0.1 from 10.244.0.53, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6e b0 a7 e4 38 e4 08 06
	[Oct25 09:35] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +1.057046] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +1.023954] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +1.023909] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +1.023917] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +1.023908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +2.047808] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +4.031795] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +8.447358] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[ +16.382923] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000014] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[Oct25 09:36] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
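
Martian-source lines mean the kernel saw packets whose source address should be impossible on the receiving interface; here that is 127.0.0.1 arriving on eth0, a pattern commonly produced by localhost-bound proxying inside nested container networks rather than by anything hostile. Whether a node logs them is governed by standard sysctls, which can be inspected with:

	sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter
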
	
	
	==> etcd [19a44bd56404c19362fe3617a93fa60a8364e0f2019540c002e629ef5e155d5f] <==
	{"level":"warn","ts":"2025-10-25T09:32:51.673065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.679681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.687079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.701801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.710610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.718533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.726077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.733236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.740165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.747095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.753840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.760771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.773958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.780633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.787198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.793571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.812375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.819706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.826626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.876608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:03.189782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:03.196604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:16.965600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:16.972311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:16.988139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51932","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [5f498c8f7524b00b664dd08f3a1f0f60a2b8ef24a467414ad638ba00176c1305] <==
	2025/10/25 09:33:56 GCP Auth Webhook started!
	2025/10/25 09:34:49 Ready to marshal response ...
	2025/10/25 09:34:49 Ready to write response ...
	2025/10/25 09:34:49 Ready to marshal response ...
	2025/10/25 09:34:49 Ready to write response ...
	2025/10/25 09:34:49 Ready to marshal response ...
	2025/10/25 09:34:49 Ready to write response ...
	2025/10/25 09:35:01 Ready to marshal response ...
	2025/10/25 09:35:01 Ready to write response ...
	2025/10/25 09:35:01 Ready to marshal response ...
	2025/10/25 09:35:01 Ready to write response ...
	2025/10/25 09:35:08 Ready to marshal response ...
	2025/10/25 09:35:08 Ready to write response ...
	2025/10/25 09:35:10 Ready to marshal response ...
	2025/10/25 09:35:10 Ready to write response ...
	2025/10/25 09:35:11 Ready to marshal response ...
	2025/10/25 09:35:11 Ready to write response ...
	2025/10/25 09:35:19 Ready to marshal response ...
	2025/10/25 09:35:19 Ready to write response ...
	2025/10/25 09:35:46 Ready to marshal response ...
	2025/10/25 09:35:46 Ready to write response ...
	2025/10/25 09:37:37 Ready to marshal response ...
	2025/10/25 09:37:37 Ready to write response ...
	
	
	==> kernel <==
	 09:37:39 up  1:20,  0 user,  load average: 1.50, 17.21, 50.32
	Linux addons-582494 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d0a9822bc2dd891d4ea20e8a611c10464e64ace303744b409fe97806e4953c3b] <==
	I1025 09:35:31.313955       1 main.go:301] handling current node
	I1025 09:35:41.314042       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:35:41.314085       1 main.go:301] handling current node
	I1025 09:35:51.313894       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:35:51.313930       1 main.go:301] handling current node
	I1025 09:36:01.314179       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:36:01.314211       1 main.go:301] handling current node
	I1025 09:36:11.314141       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:36:11.314178       1 main.go:301] handling current node
	I1025 09:36:21.314650       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:36:21.314686       1 main.go:301] handling current node
	I1025 09:36:31.314199       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:36:31.314231       1 main.go:301] handling current node
	I1025 09:36:41.314306       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:36:41.314372       1 main.go:301] handling current node
	I1025 09:36:51.320810       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:36:51.320845       1 main.go:301] handling current node
	I1025 09:37:01.314471       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:37:01.314513       1 main.go:301] handling current node
	I1025 09:37:11.314017       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:37:11.314088       1 main.go:301] handling current node
	I1025 09:37:21.314077       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:37:21.314113       1 main.go:301] handling current node
	I1025 09:37:31.314612       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:37:31.314659       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9b9bb34ede66b74552aebc3ff36135f18e45f2f481956f9f42f40048213a058d] <==
	W1025 09:33:16.285948       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 09:33:16.285989       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1025 09:33:16.285991       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1025 09:33:16.286001       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1025 09:33:16.287148       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1025 09:33:16.965483       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1025 09:33:16.972293       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1025 09:33:16.988050       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1025 09:33:16.994838       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1025 09:33:20.295508       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 09:33:20.295660       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1025 09:33:20.295847       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.189.108:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.189.108:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	I1025 09:33:20.304632       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1025 09:34:57.825771       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43066: use of closed network connection
	E1025 09:34:57.980276       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43084: use of closed network connection
	I1025 09:35:11.439020       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1025 09:35:11.635939       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.213.16"}
	I1025 09:35:30.209123       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1025 09:37:37.595960       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.102.81"}
	
	
	==> kube-controller-manager [d0ccb48b50e7aa35ae169d4df1678a14100d85aad31e6a93422011b7ddccd176] <==
	I1025 09:32:59.275580       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-582494"
	I1025 09:32:59.275652       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 09:32:59.275683       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 09:32:59.275912       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 09:32:59.276698       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 09:32:59.276735       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:32:59.276745       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 09:32:59.276770       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:32:59.276807       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:32:59.276811       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:32:59.276884       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:32:59.276977       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 09:32:59.276992       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:32:59.277138       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 09:32:59.277657       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:32:59.278912       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:32:59.284946       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:32:59.299259       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1025 09:33:01.745640       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1025 09:33:14.277949       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1025 09:33:29.290742       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1025 09:33:29.290799       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1025 09:33:29.308948       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1025 09:33:29.391055       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:33:29.409458       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [29d8b7fdf8c84878165f02cb5f082040681528c27127f1406864d3c2332146e8] <==
	I1025 09:33:00.741126       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:33:01.007897       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:33:01.112335       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:33:01.121746       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 09:33:01.121889       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:33:01.423226       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:33:01.423308       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:33:01.500491       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:33:01.514915       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:33:01.514969       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:33:01.600015       1 config.go:200] "Starting service config controller"
	I1025 09:33:01.600045       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:33:01.600075       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:33:01.600081       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:33:01.600138       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:33:01.600147       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:33:01.601048       1 config.go:309] "Starting node config controller"
	I1025 09:33:01.601058       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:33:01.601066       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:33:01.702578       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:33:01.704811       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:33:01.704838       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [62e249a5b3adf3fce1b9e30d1842ff783f291aff8d9e9b2c1082ea8806fadacf] <==
	E1025 09:32:52.294115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:32:52.294125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:32:52.295690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:32:52.295950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:32:52.296068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:32:52.296145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:32:52.296216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:32:52.296250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:32:52.296390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:32:52.296450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:32:52.296491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:32:52.296489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 09:32:53.157364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:32:53.161529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:32:53.194870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:32:53.274499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:32:53.298034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:32:53.314123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:32:53.501431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:32:53.503481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:32:53.511765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:32:53.563977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:32:53.583229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:32:53.721138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1025 09:32:55.783010       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:35:54 addons-582494 kubelet[1295]: I1025 09:35:54.253962    1295 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g447j\" (UniqueName: \"kubernetes.io/projected/981566de-a847-4cd8-a946-e77b4c548cb2-kube-api-access-g447j\") pod \"981566de-a847-4cd8-a946-e77b4c548cb2\" (UID: \"981566de-a847-4cd8-a946-e77b4c548cb2\") "
	Oct 25 09:35:54 addons-582494 kubelet[1295]: I1025 09:35:54.254033    1295 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/981566de-a847-4cd8-a946-e77b4c548cb2-gcp-creds\") pod \"981566de-a847-4cd8-a946-e77b4c548cb2\" (UID: \"981566de-a847-4cd8-a946-e77b4c548cb2\") "
	Oct 25 09:35:54 addons-582494 kubelet[1295]: I1025 09:35:54.254192    1295 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^fc01a62f-b185-11f0-b661-2eeeeb4d652e\") pod \"981566de-a847-4cd8-a946-e77b4c548cb2\" (UID: \"981566de-a847-4cd8-a946-e77b4c548cb2\") "
	Oct 25 09:35:54 addons-582494 kubelet[1295]: I1025 09:35:54.254206    1295 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/981566de-a847-4cd8-a946-e77b4c548cb2-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "981566de-a847-4cd8-a946-e77b4c548cb2" (UID: "981566de-a847-4cd8-a946-e77b4c548cb2"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 25 09:35:54 addons-582494 kubelet[1295]: I1025 09:35:54.254396    1295 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/981566de-a847-4cd8-a946-e77b4c548cb2-gcp-creds\") on node \"addons-582494\" DevicePath \"\""
	Oct 25 09:35:54 addons-582494 kubelet[1295]: I1025 09:35:54.256595    1295 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/981566de-a847-4cd8-a946-e77b4c548cb2-kube-api-access-g447j" (OuterVolumeSpecName: "kube-api-access-g447j") pod "981566de-a847-4cd8-a946-e77b4c548cb2" (UID: "981566de-a847-4cd8-a946-e77b4c548cb2"). InnerVolumeSpecName "kube-api-access-g447j". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 25 09:35:54 addons-582494 kubelet[1295]: I1025 09:35:54.257581    1295 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^fc01a62f-b185-11f0-b661-2eeeeb4d652e" (OuterVolumeSpecName: "task-pv-storage") pod "981566de-a847-4cd8-a946-e77b4c548cb2" (UID: "981566de-a847-4cd8-a946-e77b4c548cb2"). InnerVolumeSpecName "pvc-7dc57a88-13af-4246-88a2-a3f2b59e51c1". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Oct 25 09:35:54 addons-582494 kubelet[1295]: I1025 09:35:54.355435    1295 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g447j\" (UniqueName: \"kubernetes.io/projected/981566de-a847-4cd8-a946-e77b4c548cb2-kube-api-access-g447j\") on node \"addons-582494\" DevicePath \"\""
	Oct 25 09:35:54 addons-582494 kubelet[1295]: I1025 09:35:54.355506    1295 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-7dc57a88-13af-4246-88a2-a3f2b59e51c1\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^fc01a62f-b185-11f0-b661-2eeeeb4d652e\") on node \"addons-582494\" "
	Oct 25 09:35:54 addons-582494 kubelet[1295]: I1025 09:35:54.360556    1295 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-7dc57a88-13af-4246-88a2-a3f2b59e51c1" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^fc01a62f-b185-11f0-b661-2eeeeb4d652e") on node "addons-582494"
	Oct 25 09:35:54 addons-582494 kubelet[1295]: I1025 09:35:54.456556    1295 reconciler_common.go:299] "Volume detached for volume \"pvc-7dc57a88-13af-4246-88a2-a3f2b59e51c1\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^fc01a62f-b185-11f0-b661-2eeeeb4d652e\") on node \"addons-582494\" DevicePath \"\""
	Oct 25 09:35:54 addons-582494 kubelet[1295]: I1025 09:35:54.544596    1295 scope.go:117] "RemoveContainer" containerID="5dec2b9b3e37b82fb87a830b15ab2d292bd15c5f7b67832a2ec1b61611b07b6a"
	Oct 25 09:35:54 addons-582494 kubelet[1295]: I1025 09:35:54.555822    1295 scope.go:117] "RemoveContainer" containerID="5dec2b9b3e37b82fb87a830b15ab2d292bd15c5f7b67832a2ec1b61611b07b6a"
	Oct 25 09:35:54 addons-582494 kubelet[1295]: E1025 09:35:54.556396    1295 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dec2b9b3e37b82fb87a830b15ab2d292bd15c5f7b67832a2ec1b61611b07b6a\": container with ID starting with 5dec2b9b3e37b82fb87a830b15ab2d292bd15c5f7b67832a2ec1b61611b07b6a not found: ID does not exist" containerID="5dec2b9b3e37b82fb87a830b15ab2d292bd15c5f7b67832a2ec1b61611b07b6a"
	Oct 25 09:35:54 addons-582494 kubelet[1295]: I1025 09:35:54.556449    1295 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dec2b9b3e37b82fb87a830b15ab2d292bd15c5f7b67832a2ec1b61611b07b6a"} err="failed to get container status \"5dec2b9b3e37b82fb87a830b15ab2d292bd15c5f7b67832a2ec1b61611b07b6a\": rpc error: code = NotFound desc = could not find container \"5dec2b9b3e37b82fb87a830b15ab2d292bd15c5f7b67832a2ec1b61611b07b6a\": container with ID starting with 5dec2b9b3e37b82fb87a830b15ab2d292bd15c5f7b67832a2ec1b61611b07b6a not found: ID does not exist"
	Oct 25 09:35:54 addons-582494 kubelet[1295]: I1025 09:35:54.659196    1295 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="981566de-a847-4cd8-a946-e77b4c548cb2" path="/var/lib/kubelet/pods/981566de-a847-4cd8-a946-e77b4c548cb2/volumes"
	Oct 25 09:35:54 addons-582494 kubelet[1295]: I1025 09:35:54.687524    1295 scope.go:117] "RemoveContainer" containerID="22d0c0318a6a5b11d9502dfa505c0dabdbca5847cfdd484ba8f2b6464a9e9bad"
	Oct 25 09:35:54 addons-582494 kubelet[1295]: I1025 09:35:54.700126    1295 scope.go:117] "RemoveContainer" containerID="5c32064e9454bae20315d8251917c21b4e62505bcc0477fe9ce40688f1df0fe7"
	Oct 25 09:35:54 addons-582494 kubelet[1295]: I1025 09:35:54.712985    1295 scope.go:117] "RemoveContainer" containerID="efc29e82332e54cdfba12fa5d7d6e9e486c83b522bdcd15c6d1d0b56f74b20e3"
	Oct 25 09:36:13 addons-582494 kubelet[1295]: I1025 09:36:13.656262    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-wln7g" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:36:20 addons-582494 kubelet[1295]: I1025 09:36:20.656669    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-j28pq" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:36:24 addons-582494 kubelet[1295]: I1025 09:36:24.657988    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-vjtwb" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:37:32 addons-582494 kubelet[1295]: I1025 09:37:32.656942    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-j28pq" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:37:37 addons-582494 kubelet[1295]: I1025 09:37:37.550454    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5267b033-da5b-4c2a-a1a1-58bb077b7b69-gcp-creds\") pod \"hello-world-app-5d498dc89-hnrd7\" (UID: \"5267b033-da5b-4c2a-a1a1-58bb077b7b69\") " pod="default/hello-world-app-5d498dc89-hnrd7"
	Oct 25 09:37:37 addons-582494 kubelet[1295]: I1025 09:37:37.550526    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drpnm\" (UniqueName: \"kubernetes.io/projected/5267b033-da5b-4c2a-a1a1-58bb077b7b69-kube-api-access-drpnm\") pod \"hello-world-app-5d498dc89-hnrd7\" (UID: \"5267b033-da5b-4c2a-a1a1-58bb077b7b69\") " pod="default/hello-world-app-5d498dc89-hnrd7"
	
	
	==> storage-provisioner [b927c0ae13deb3eeb19fde0c0340c1c698692cca9d7ba0a28cdeb1167e99cd0a] <==
	W1025 09:37:13.487936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:15.491274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:15.495401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:17.498557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:17.504156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:19.507623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:19.511852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:21.515216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:21.519207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:23.523120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:23.528429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:25.532006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:25.536505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:27.539980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:27.544192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:29.547011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:29.552521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:31.555776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:31.560574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:33.564283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:33.568801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:35.572173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:35.577770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:37.581173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:37.585258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-582494 -n addons-582494
helpers_test.go:269: (dbg) Run:  kubectl --context addons-582494 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-jk78g ingress-nginx-admission-patch-l8h7x
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-582494 describe pod ingress-nginx-admission-create-jk78g ingress-nginx-admission-patch-l8h7x
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-582494 describe pod ingress-nginx-admission-create-jk78g ingress-nginx-admission-patch-l8h7x: exit status 1 (58.131742ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jk78g" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-l8h7x" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-582494 describe pod ingress-nginx-admission-create-jk78g ingress-nginx-admission-patch-l8h7x: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-582494 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-582494 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (257.181465ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1025 09:37:40.212026  341744 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:37:40.212350  341744 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:37:40.212360  341744 out.go:374] Setting ErrFile to fd 2...
	I1025 09:37:40.212363  341744 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:37:40.212587  341744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 09:37:40.212866  341744 mustload.go:65] Loading cluster: addons-582494
	I1025 09:37:40.213230  341744 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:37:40.213247  341744 addons.go:606] checking whether the cluster is paused
	I1025 09:37:40.213343  341744 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:37:40.213357  341744 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:37:40.213781  341744 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:37:40.234416  341744 ssh_runner.go:195] Run: systemctl --version
	I1025 09:37:40.234487  341744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:37:40.251878  341744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:37:40.353736  341744 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:37:40.353850  341744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:37:40.385073  341744 cri.go:89] found id: "9133b79deac292e96b3912c2350995b2b6208a8b3675582722dc16a9cb95cf16"
	I1025 09:37:40.385095  341744 cri.go:89] found id: "a590641d195442d7c8f9417c224a3dfd0909fa17ff5dffbbee56d77203a7bc30"
	I1025 09:37:40.385099  341744 cri.go:89] found id: "14680175d4318c6439bfb260920f00d4fb15de1e1ed56f7cd5a7fdc5088d817c"
	I1025 09:37:40.385102  341744 cri.go:89] found id: "8b3ea24513b9dbeed1495a8ece257396262d09ae53d85f508fd9e1aa15fae881"
	I1025 09:37:40.385104  341744 cri.go:89] found id: "1c33d20dccf9d28551e1afe73e2aa2a5233a190fe5036da5597ab8f98d35e7e1"
	I1025 09:37:40.385109  341744 cri.go:89] found id: "19e3e274001e72f84f8eb6cbd581c789c82111bc575de760d12a318646815997"
	I1025 09:37:40.385111  341744 cri.go:89] found id: "fd4a5a7d8c5f4281000825cc9877d3ea27a21a958879a5db98ee78c72c35f3f4"
	I1025 09:37:40.385114  341744 cri.go:89] found id: "aaacd09fa43cb6730a3a85ccb82d8f4f88d649d37aed22b5d9478f826dd71446"
	I1025 09:37:40.385117  341744 cri.go:89] found id: "5f1abc3fa71fd76f7122379a39679051b1b37e07736695f416558bb08013c9a0"
	I1025 09:37:40.385125  341744 cri.go:89] found id: "ba8a2ae228e5ae5757cffd5f4e4c1b0f6a57d3b7dbac09500e7eb8bad2ffeda6"
	I1025 09:37:40.385128  341744 cri.go:89] found id: "b2e5cedb9fdb4dc8cf750ad182b9d0b075fe38dfe8202975ba1bc91144918969"
	I1025 09:37:40.385131  341744 cri.go:89] found id: "53959ea9bc3e27a71fdfa582a79586fd4fbba5704ce52884b6f578c2371cf734"
	I1025 09:37:40.385133  341744 cri.go:89] found id: "214643b0e233a8f7275185c6308eadd6b3d0e92ec613c31139061014c04338cd"
	I1025 09:37:40.385135  341744 cri.go:89] found id: "1bf763269e9c6cc17fb6ef6bcce3ea5f64cabe52e37b91c75c32967fd2e733f1"
	I1025 09:37:40.385138  341744 cri.go:89] found id: "e59c39fff2eab9a7167d0388e3624c34d57aee469cc349ffd0faa057312a177f"
	I1025 09:37:40.385142  341744 cri.go:89] found id: "42f1c21ebcd7182710da30d4c9fa79ad171f45c43481b3a15df698872a884c69"
	I1025 09:37:40.385144  341744 cri.go:89] found id: "eb8b6e448a83470b682c8b0a60f02504d2943bfc97e4fb2b6411d4a79b1140d5"
	I1025 09:37:40.385148  341744 cri.go:89] found id: "d0697f1703581cace0f227a3658ea08db9401a3bc389367933d7353764a27242"
	I1025 09:37:40.385151  341744 cri.go:89] found id: "b927c0ae13deb3eeb19fde0c0340c1c698692cca9d7ba0a28cdeb1167e99cd0a"
	I1025 09:37:40.385154  341744 cri.go:89] found id: "29d8b7fdf8c84878165f02cb5f082040681528c27127f1406864d3c2332146e8"
	I1025 09:37:40.385156  341744 cri.go:89] found id: "d0a9822bc2dd891d4ea20e8a611c10464e64ace303744b409fe97806e4953c3b"
	I1025 09:37:40.385158  341744 cri.go:89] found id: "19a44bd56404c19362fe3617a93fa60a8364e0f2019540c002e629ef5e155d5f"
	I1025 09:37:40.385161  341744 cri.go:89] found id: "9b9bb34ede66b74552aebc3ff36135f18e45f2f481956f9f42f40048213a058d"
	I1025 09:37:40.385163  341744 cri.go:89] found id: "d0ccb48b50e7aa35ae169d4df1678a14100d85aad31e6a93422011b7ddccd176"
	I1025 09:37:40.385165  341744 cri.go:89] found id: "62e249a5b3adf3fce1b9e30d1842ff783f291aff8d9e9b2c1082ea8806fadacf"
	I1025 09:37:40.385168  341744 cri.go:89] found id: ""
	I1025 09:37:40.385207  341744 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:37:40.400453  341744 out.go:203] 
	W1025 09:37:40.401810  341744 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:37:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:37:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:37:40.401825  341744 out.go:285] * 
	* 
	W1025 09:37:40.405000  341744 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:37:40.406366  341744 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-582494 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
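Every `addons disable` failure in this report exits the same way: minikube's paused-state check shells out to `sudo runc list -f json` on the node, and that fails because `/run/runc` does not exist. Most likely crio here is not using runc as its OCI runtime, or keeps runtime state under a different root, so `runc` has no state directory to read. A minimal way to confirm from the host (a sketch, assuming the profile name from this run; runtime names and state paths may differ by image):

	# Re-run the exact probe minikube uses; on this node it should fail with
	# "open /run/runc: no such file or directory", matching the stderr above.
	minikube -p addons-582494 ssh -- sudo runc list -f json

	# Inspect which OCI runtime crio is actually configured with, and where
	# its state lives (crun, for example, keeps state under /run/crun).
	minikube -p addons-582494 ssh -- sudo crictl info
	minikube -p addons-582494 ssh -- ls -d /run/runc /run/crun

The ingress, inspektor-gadget, and remaining disable failures below hit the same probe, so their exit status 11 results are symptoms of this one check rather than per-addon regressions.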
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-582494 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-582494 addons disable ingress --alsologtostderr -v=1: exit status 11 (260.774351ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1025 09:37:40.472287  341808 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:37:40.472490  341808 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:37:40.472501  341808 out.go:374] Setting ErrFile to fd 2...
	I1025 09:37:40.472506  341808 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:37:40.472716  341808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 09:37:40.473068  341808 mustload.go:65] Loading cluster: addons-582494
	I1025 09:37:40.473431  341808 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:37:40.473449  341808 addons.go:606] checking whether the cluster is paused
	I1025 09:37:40.473534  341808 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:37:40.473548  341808 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:37:40.473936  341808 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:37:40.492873  341808 ssh_runner.go:195] Run: systemctl --version
	I1025 09:37:40.492930  341808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:37:40.512260  341808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:37:40.613465  341808 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:37:40.613561  341808 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:37:40.644953  341808 cri.go:89] found id: "9133b79deac292e96b3912c2350995b2b6208a8b3675582722dc16a9cb95cf16"
	I1025 09:37:40.644979  341808 cri.go:89] found id: "a590641d195442d7c8f9417c224a3dfd0909fa17ff5dffbbee56d77203a7bc30"
	I1025 09:37:40.644987  341808 cri.go:89] found id: "14680175d4318c6439bfb260920f00d4fb15de1e1ed56f7cd5a7fdc5088d817c"
	I1025 09:37:40.644992  341808 cri.go:89] found id: "8b3ea24513b9dbeed1495a8ece257396262d09ae53d85f508fd9e1aa15fae881"
	I1025 09:37:40.644996  341808 cri.go:89] found id: "1c33d20dccf9d28551e1afe73e2aa2a5233a190fe5036da5597ab8f98d35e7e1"
	I1025 09:37:40.645002  341808 cri.go:89] found id: "19e3e274001e72f84f8eb6cbd581c789c82111bc575de760d12a318646815997"
	I1025 09:37:40.645007  341808 cri.go:89] found id: "fd4a5a7d8c5f4281000825cc9877d3ea27a21a958879a5db98ee78c72c35f3f4"
	I1025 09:37:40.645009  341808 cri.go:89] found id: "aaacd09fa43cb6730a3a85ccb82d8f4f88d649d37aed22b5d9478f826dd71446"
	I1025 09:37:40.645012  341808 cri.go:89] found id: "5f1abc3fa71fd76f7122379a39679051b1b37e07736695f416558bb08013c9a0"
	I1025 09:37:40.645026  341808 cri.go:89] found id: "ba8a2ae228e5ae5757cffd5f4e4c1b0f6a57d3b7dbac09500e7eb8bad2ffeda6"
	I1025 09:37:40.645032  341808 cri.go:89] found id: "b2e5cedb9fdb4dc8cf750ad182b9d0b075fe38dfe8202975ba1bc91144918969"
	I1025 09:37:40.645034  341808 cri.go:89] found id: "53959ea9bc3e27a71fdfa582a79586fd4fbba5704ce52884b6f578c2371cf734"
	I1025 09:37:40.645037  341808 cri.go:89] found id: "214643b0e233a8f7275185c6308eadd6b3d0e92ec613c31139061014c04338cd"
	I1025 09:37:40.645039  341808 cri.go:89] found id: "1bf763269e9c6cc17fb6ef6bcce3ea5f64cabe52e37b91c75c32967fd2e733f1"
	I1025 09:37:40.645042  341808 cri.go:89] found id: "e59c39fff2eab9a7167d0388e3624c34d57aee469cc349ffd0faa057312a177f"
	I1025 09:37:40.645046  341808 cri.go:89] found id: "42f1c21ebcd7182710da30d4c9fa79ad171f45c43481b3a15df698872a884c69"
	I1025 09:37:40.645051  341808 cri.go:89] found id: "eb8b6e448a83470b682c8b0a60f02504d2943bfc97e4fb2b6411d4a79b1140d5"
	I1025 09:37:40.645055  341808 cri.go:89] found id: "d0697f1703581cace0f227a3658ea08db9401a3bc389367933d7353764a27242"
	I1025 09:37:40.645058  341808 cri.go:89] found id: "b927c0ae13deb3eeb19fde0c0340c1c698692cca9d7ba0a28cdeb1167e99cd0a"
	I1025 09:37:40.645065  341808 cri.go:89] found id: "29d8b7fdf8c84878165f02cb5f082040681528c27127f1406864d3c2332146e8"
	I1025 09:37:40.645070  341808 cri.go:89] found id: "d0a9822bc2dd891d4ea20e8a611c10464e64ace303744b409fe97806e4953c3b"
	I1025 09:37:40.645073  341808 cri.go:89] found id: "19a44bd56404c19362fe3617a93fa60a8364e0f2019540c002e629ef5e155d5f"
	I1025 09:37:40.645075  341808 cri.go:89] found id: "9b9bb34ede66b74552aebc3ff36135f18e45f2f481956f9f42f40048213a058d"
	I1025 09:37:40.645088  341808 cri.go:89] found id: "d0ccb48b50e7aa35ae169d4df1678a14100d85aad31e6a93422011b7ddccd176"
	I1025 09:37:40.645093  341808 cri.go:89] found id: "62e249a5b3adf3fce1b9e30d1842ff783f291aff8d9e9b2c1082ea8806fadacf"
	I1025 09:37:40.645095  341808 cri.go:89] found id: ""
	I1025 09:37:40.645135  341808 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:37:40.661306  341808 out.go:203] 
	W1025 09:37:40.662867  341808 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:37:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:37:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:37:40.662887  341808 out.go:285] * 
	* 
	W1025 09:37:40.666035  341808 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:37:40.667458  341808 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-582494 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (149.49s)

TestAddons/parallel/InspektorGadget (5.27s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-mhs6l" [36987ca4-8336-4a46-984d-985219ea3122] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003574907s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-582494 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-582494 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (266.727846ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1025 09:35:17.893171  338432 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:35:17.894230  338432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:17.894244  338432 out.go:374] Setting ErrFile to fd 2...
	I1025 09:35:17.894248  338432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:17.894486  338432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 09:35:17.894854  338432 mustload.go:65] Loading cluster: addons-582494
	I1025 09:35:17.895267  338432 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:17.895291  338432 addons.go:606] checking whether the cluster is paused
	I1025 09:35:17.895395  338432 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:17.895411  338432 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:35:17.895795  338432 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:35:17.914502  338432 ssh_runner.go:195] Run: systemctl --version
	I1025 09:35:17.914569  338432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:35:17.933613  338432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:35:18.038469  338432 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:35:18.038558  338432 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:35:18.070821  338432 cri.go:89] found id: "a590641d195442d7c8f9417c224a3dfd0909fa17ff5dffbbee56d77203a7bc30"
	I1025 09:35:18.070844  338432 cri.go:89] found id: "14680175d4318c6439bfb260920f00d4fb15de1e1ed56f7cd5a7fdc5088d817c"
	I1025 09:35:18.070848  338432 cri.go:89] found id: "8b3ea24513b9dbeed1495a8ece257396262d09ae53d85f508fd9e1aa15fae881"
	I1025 09:35:18.070851  338432 cri.go:89] found id: "1c33d20dccf9d28551e1afe73e2aa2a5233a190fe5036da5597ab8f98d35e7e1"
	I1025 09:35:18.070854  338432 cri.go:89] found id: "19e3e274001e72f84f8eb6cbd581c789c82111bc575de760d12a318646815997"
	I1025 09:35:18.070863  338432 cri.go:89] found id: "fd4a5a7d8c5f4281000825cc9877d3ea27a21a958879a5db98ee78c72c35f3f4"
	I1025 09:35:18.070866  338432 cri.go:89] found id: "aaacd09fa43cb6730a3a85ccb82d8f4f88d649d37aed22b5d9478f826dd71446"
	I1025 09:35:18.070868  338432 cri.go:89] found id: "5f1abc3fa71fd76f7122379a39679051b1b37e07736695f416558bb08013c9a0"
	I1025 09:35:18.070871  338432 cri.go:89] found id: "ba8a2ae228e5ae5757cffd5f4e4c1b0f6a57d3b7dbac09500e7eb8bad2ffeda6"
	I1025 09:35:18.070882  338432 cri.go:89] found id: "b2e5cedb9fdb4dc8cf750ad182b9d0b075fe38dfe8202975ba1bc91144918969"
	I1025 09:35:18.070885  338432 cri.go:89] found id: "53959ea9bc3e27a71fdfa582a79586fd4fbba5704ce52884b6f578c2371cf734"
	I1025 09:35:18.070887  338432 cri.go:89] found id: "214643b0e233a8f7275185c6308eadd6b3d0e92ec613c31139061014c04338cd"
	I1025 09:35:18.070890  338432 cri.go:89] found id: "1bf763269e9c6cc17fb6ef6bcce3ea5f64cabe52e37b91c75c32967fd2e733f1"
	I1025 09:35:18.070893  338432 cri.go:89] found id: "e59c39fff2eab9a7167d0388e3624c34d57aee469cc349ffd0faa057312a177f"
	I1025 09:35:18.070895  338432 cri.go:89] found id: "42f1c21ebcd7182710da30d4c9fa79ad171f45c43481b3a15df698872a884c69"
	I1025 09:35:18.070899  338432 cri.go:89] found id: "eb8b6e448a83470b682c8b0a60f02504d2943bfc97e4fb2b6411d4a79b1140d5"
	I1025 09:35:18.070902  338432 cri.go:89] found id: "d0697f1703581cace0f227a3658ea08db9401a3bc389367933d7353764a27242"
	I1025 09:35:18.070906  338432 cri.go:89] found id: "b927c0ae13deb3eeb19fde0c0340c1c698692cca9d7ba0a28cdeb1167e99cd0a"
	I1025 09:35:18.070908  338432 cri.go:89] found id: "29d8b7fdf8c84878165f02cb5f082040681528c27127f1406864d3c2332146e8"
	I1025 09:35:18.070910  338432 cri.go:89] found id: "d0a9822bc2dd891d4ea20e8a611c10464e64ace303744b409fe97806e4953c3b"
	I1025 09:35:18.070913  338432 cri.go:89] found id: "19a44bd56404c19362fe3617a93fa60a8364e0f2019540c002e629ef5e155d5f"
	I1025 09:35:18.070915  338432 cri.go:89] found id: "9b9bb34ede66b74552aebc3ff36135f18e45f2f481956f9f42f40048213a058d"
	I1025 09:35:18.070917  338432 cri.go:89] found id: "d0ccb48b50e7aa35ae169d4df1678a14100d85aad31e6a93422011b7ddccd176"
	I1025 09:35:18.070920  338432 cri.go:89] found id: "62e249a5b3adf3fce1b9e30d1842ff783f291aff8d9e9b2c1082ea8806fadacf"
	I1025 09:35:18.070922  338432 cri.go:89] found id: ""
	I1025 09:35:18.070965  338432 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:35:18.086484  338432 out.go:203] 
	W1025 09:35:18.087804  338432 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:35:18.087828  338432 out.go:285] * 
	* 
	W1025 09:35:18.090966  338432 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:35:18.092558  338432 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-582494 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.27s)
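The gadget pods themselves were healthy (ready within ~5s); only the shared runc pause check failed. To re-verify the addon's workload by hand, using the selector and namespace from the wait step above:

	kubectl --context addons-582494 get pods -n gadget -l k8s-app=gadget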

TestAddons/parallel/MetricsServer (5.34s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.543491ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-wnq6w" [5f738d19-fe71-4220-81a0-135edefc3540] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003720415s
addons_test.go:463: (dbg) Run:  kubectl --context addons-582494 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-582494 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-582494 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (268.300379ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:35:09.678417  336813 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:35:09.679305  336813 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:09.679343  336813 out.go:374] Setting ErrFile to fd 2...
	I1025 09:35:09.679351  336813 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:09.679533  336813 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 09:35:09.679820  336813 mustload.go:65] Loading cluster: addons-582494
	I1025 09:35:09.680192  336813 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:09.680209  336813 addons.go:606] checking whether the cluster is paused
	I1025 09:35:09.680289  336813 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:09.680301  336813 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:35:09.680691  336813 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:35:09.699355  336813 ssh_runner.go:195] Run: systemctl --version
	I1025 09:35:09.699415  336813 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:35:09.718950  336813 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:35:09.819778  336813 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:35:09.819866  336813 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:35:09.856176  336813 cri.go:89] found id: "a590641d195442d7c8f9417c224a3dfd0909fa17ff5dffbbee56d77203a7bc30"
	I1025 09:35:09.856203  336813 cri.go:89] found id: "14680175d4318c6439bfb260920f00d4fb15de1e1ed56f7cd5a7fdc5088d817c"
	I1025 09:35:09.856208  336813 cri.go:89] found id: "8b3ea24513b9dbeed1495a8ece257396262d09ae53d85f508fd9e1aa15fae881"
	I1025 09:35:09.856213  336813 cri.go:89] found id: "1c33d20dccf9d28551e1afe73e2aa2a5233a190fe5036da5597ab8f98d35e7e1"
	I1025 09:35:09.856217  336813 cri.go:89] found id: "19e3e274001e72f84f8eb6cbd581c789c82111bc575de760d12a318646815997"
	I1025 09:35:09.856222  336813 cri.go:89] found id: "fd4a5a7d8c5f4281000825cc9877d3ea27a21a958879a5db98ee78c72c35f3f4"
	I1025 09:35:09.856226  336813 cri.go:89] found id: "aaacd09fa43cb6730a3a85ccb82d8f4f88d649d37aed22b5d9478f826dd71446"
	I1025 09:35:09.856230  336813 cri.go:89] found id: "5f1abc3fa71fd76f7122379a39679051b1b37e07736695f416558bb08013c9a0"
	I1025 09:35:09.856235  336813 cri.go:89] found id: "ba8a2ae228e5ae5757cffd5f4e4c1b0f6a57d3b7dbac09500e7eb8bad2ffeda6"
	I1025 09:35:09.856243  336813 cri.go:89] found id: "b2e5cedb9fdb4dc8cf750ad182b9d0b075fe38dfe8202975ba1bc91144918969"
	I1025 09:35:09.856247  336813 cri.go:89] found id: "53959ea9bc3e27a71fdfa582a79586fd4fbba5704ce52884b6f578c2371cf734"
	I1025 09:35:09.856250  336813 cri.go:89] found id: "214643b0e233a8f7275185c6308eadd6b3d0e92ec613c31139061014c04338cd"
	I1025 09:35:09.856255  336813 cri.go:89] found id: "1bf763269e9c6cc17fb6ef6bcce3ea5f64cabe52e37b91c75c32967fd2e733f1"
	I1025 09:35:09.856259  336813 cri.go:89] found id: "e59c39fff2eab9a7167d0388e3624c34d57aee469cc349ffd0faa057312a177f"
	I1025 09:35:09.856269  336813 cri.go:89] found id: "42f1c21ebcd7182710da30d4c9fa79ad171f45c43481b3a15df698872a884c69"
	I1025 09:35:09.856282  336813 cri.go:89] found id: "eb8b6e448a83470b682c8b0a60f02504d2943bfc97e4fb2b6411d4a79b1140d5"
	I1025 09:35:09.856286  336813 cri.go:89] found id: "d0697f1703581cace0f227a3658ea08db9401a3bc389367933d7353764a27242"
	I1025 09:35:09.856291  336813 cri.go:89] found id: "b927c0ae13deb3eeb19fde0c0340c1c698692cca9d7ba0a28cdeb1167e99cd0a"
	I1025 09:35:09.856295  336813 cri.go:89] found id: "29d8b7fdf8c84878165f02cb5f082040681528c27127f1406864d3c2332146e8"
	I1025 09:35:09.856300  336813 cri.go:89] found id: "d0a9822bc2dd891d4ea20e8a611c10464e64ace303744b409fe97806e4953c3b"
	I1025 09:35:09.856304  336813 cri.go:89] found id: "19a44bd56404c19362fe3617a93fa60a8364e0f2019540c002e629ef5e155d5f"
	I1025 09:35:09.856308  336813 cri.go:89] found id: "9b9bb34ede66b74552aebc3ff36135f18e45f2f481956f9f42f40048213a058d"
	I1025 09:35:09.856312  336813 cri.go:89] found id: "d0ccb48b50e7aa35ae169d4df1678a14100d85aad31e6a93422011b7ddccd176"
	I1025 09:35:09.856330  336813 cri.go:89] found id: "62e249a5b3adf3fce1b9e30d1842ff783f291aff8d9e9b2c1082ea8806fadacf"
	I1025 09:35:09.856335  336813 cri.go:89] found id: ""
	I1025 09:35:09.856385  336813 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:35:09.873032  336813 out.go:203] 
	W1025 09:35:09.874515  336813 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:35:09.874538  336813 out.go:285] * 
	* 
	W1025 09:35:09.877790  336813 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:35:09.879114  336813 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-582494 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.34s)
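Same pattern here: metrics-server stabilized and `kubectl top pods -n kube-system` returned data before the disable step hit the runc check, so the addon was functional. The verification step can be repeated by hand:

	kubectl --context addons-582494 top pods -n kube-system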

TestAddons/parallel/CSI (45.62s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1025 09:35:09.822266  325455 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1025 09:35:09.825564  325455 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1025 09:35:09.825589  325455 kapi.go:107] duration metric: took 3.346367ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.355614ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-582494 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-582494 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [5d2efcb4-92c6-4d3f-a52f-e43dd718dfc3] Pending
helpers_test.go:352: "task-pv-pod" [5d2efcb4-92c6-4d3f-a52f-e43dd718dfc3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [5d2efcb4-92c6-4d3f-a52f-e43dd718dfc3] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003831972s
addons_test.go:572: (dbg) Run:  kubectl --context addons-582494 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-582494 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-582494 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-582494 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-582494 delete pod task-pv-pod: (1.187484767s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-582494 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-582494 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-582494 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [981566de-a847-4cd8-a946-e77b4c548cb2] Pending
helpers_test.go:352: "task-pv-pod-restore" [981566de-a847-4cd8-a946-e77b4c548cb2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [981566de-a847-4cd8-a946-e77b4c548cb2] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004510116s
addons_test.go:614: (dbg) Run:  kubectl --context addons-582494 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-582494 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-582494 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-582494 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-582494 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (262.859119ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:35:54.961603  339627 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:35:54.962572  339627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:54.962588  339627 out.go:374] Setting ErrFile to fd 2...
	I1025 09:35:54.962595  339627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:54.962855  339627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 09:35:54.963178  339627 mustload.go:65] Loading cluster: addons-582494
	I1025 09:35:54.963589  339627 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:54.963611  339627 addons.go:606] checking whether the cluster is paused
	I1025 09:35:54.963713  339627 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:54.963737  339627 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:35:54.964160  339627 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:35:54.983155  339627 ssh_runner.go:195] Run: systemctl --version
	I1025 09:35:54.983212  339627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:35:55.001834  339627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:35:55.106005  339627 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:35:55.106120  339627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:35:55.137552  339627 cri.go:89] found id: "9133b79deac292e96b3912c2350995b2b6208a8b3675582722dc16a9cb95cf16"
	I1025 09:35:55.137579  339627 cri.go:89] found id: "a590641d195442d7c8f9417c224a3dfd0909fa17ff5dffbbee56d77203a7bc30"
	I1025 09:35:55.137585  339627 cri.go:89] found id: "14680175d4318c6439bfb260920f00d4fb15de1e1ed56f7cd5a7fdc5088d817c"
	I1025 09:35:55.137590  339627 cri.go:89] found id: "8b3ea24513b9dbeed1495a8ece257396262d09ae53d85f508fd9e1aa15fae881"
	I1025 09:35:55.137595  339627 cri.go:89] found id: "1c33d20dccf9d28551e1afe73e2aa2a5233a190fe5036da5597ab8f98d35e7e1"
	I1025 09:35:55.137599  339627 cri.go:89] found id: "19e3e274001e72f84f8eb6cbd581c789c82111bc575de760d12a318646815997"
	I1025 09:35:55.137603  339627 cri.go:89] found id: "fd4a5a7d8c5f4281000825cc9877d3ea27a21a958879a5db98ee78c72c35f3f4"
	I1025 09:35:55.137605  339627 cri.go:89] found id: "aaacd09fa43cb6730a3a85ccb82d8f4f88d649d37aed22b5d9478f826dd71446"
	I1025 09:35:55.137609  339627 cri.go:89] found id: "5f1abc3fa71fd76f7122379a39679051b1b37e07736695f416558bb08013c9a0"
	I1025 09:35:55.137616  339627 cri.go:89] found id: "ba8a2ae228e5ae5757cffd5f4e4c1b0f6a57d3b7dbac09500e7eb8bad2ffeda6"
	I1025 09:35:55.137620  339627 cri.go:89] found id: "b2e5cedb9fdb4dc8cf750ad182b9d0b075fe38dfe8202975ba1bc91144918969"
	I1025 09:35:55.137625  339627 cri.go:89] found id: "53959ea9bc3e27a71fdfa582a79586fd4fbba5704ce52884b6f578c2371cf734"
	I1025 09:35:55.137634  339627 cri.go:89] found id: "214643b0e233a8f7275185c6308eadd6b3d0e92ec613c31139061014c04338cd"
	I1025 09:35:55.137639  339627 cri.go:89] found id: "1bf763269e9c6cc17fb6ef6bcce3ea5f64cabe52e37b91c75c32967fd2e733f1"
	I1025 09:35:55.137643  339627 cri.go:89] found id: "e59c39fff2eab9a7167d0388e3624c34d57aee469cc349ffd0faa057312a177f"
	I1025 09:35:55.137653  339627 cri.go:89] found id: "42f1c21ebcd7182710da30d4c9fa79ad171f45c43481b3a15df698872a884c69"
	I1025 09:35:55.137660  339627 cri.go:89] found id: "eb8b6e448a83470b682c8b0a60f02504d2943bfc97e4fb2b6411d4a79b1140d5"
	I1025 09:35:55.137668  339627 cri.go:89] found id: "d0697f1703581cace0f227a3658ea08db9401a3bc389367933d7353764a27242"
	I1025 09:35:55.137672  339627 cri.go:89] found id: "b927c0ae13deb3eeb19fde0c0340c1c698692cca9d7ba0a28cdeb1167e99cd0a"
	I1025 09:35:55.137676  339627 cri.go:89] found id: "29d8b7fdf8c84878165f02cb5f082040681528c27127f1406864d3c2332146e8"
	I1025 09:35:55.137680  339627 cri.go:89] found id: "d0a9822bc2dd891d4ea20e8a611c10464e64ace303744b409fe97806e4953c3b"
	I1025 09:35:55.137684  339627 cri.go:89] found id: "19a44bd56404c19362fe3617a93fa60a8364e0f2019540c002e629ef5e155d5f"
	I1025 09:35:55.137690  339627 cri.go:89] found id: "9b9bb34ede66b74552aebc3ff36135f18e45f2f481956f9f42f40048213a058d"
	I1025 09:35:55.137697  339627 cri.go:89] found id: "d0ccb48b50e7aa35ae169d4df1678a14100d85aad31e6a93422011b7ddccd176"
	I1025 09:35:55.137706  339627 cri.go:89] found id: "62e249a5b3adf3fce1b9e30d1842ff783f291aff8d9e9b2c1082ea8806fadacf"
	I1025 09:35:55.137713  339627 cri.go:89] found id: ""
	I1025 09:35:55.137766  339627 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:35:55.154276  339627 out.go:203] 
	W1025 09:35:55.155709  339627 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:35:55.155738  339627 out.go:285] * 
	* 
	W1025 09:35:55.158965  339627 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:35:55.160744  339627 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-582494 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-582494 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-582494 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (269.252486ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:35:55.230262  339690 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:35:55.231196  339690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:55.231218  339690 out.go:374] Setting ErrFile to fd 2...
	I1025 09:35:55.231223  339690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:55.231504  339690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 09:35:55.231816  339690 mustload.go:65] Loading cluster: addons-582494
	I1025 09:35:55.232190  339690 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:55.232213  339690 addons.go:606] checking whether the cluster is paused
	I1025 09:35:55.232299  339690 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:55.232314  339690 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:35:55.232742  339690 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:35:55.252564  339690 ssh_runner.go:195] Run: systemctl --version
	I1025 09:35:55.252621  339690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:35:55.272090  339690 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:35:55.374858  339690 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:35:55.374967  339690 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:35:55.407283  339690 cri.go:89] found id: "9133b79deac292e96b3912c2350995b2b6208a8b3675582722dc16a9cb95cf16"
	I1025 09:35:55.407309  339690 cri.go:89] found id: "a590641d195442d7c8f9417c224a3dfd0909fa17ff5dffbbee56d77203a7bc30"
	I1025 09:35:55.407326  339690 cri.go:89] found id: "14680175d4318c6439bfb260920f00d4fb15de1e1ed56f7cd5a7fdc5088d817c"
	I1025 09:35:55.407332  339690 cri.go:89] found id: "8b3ea24513b9dbeed1495a8ece257396262d09ae53d85f508fd9e1aa15fae881"
	I1025 09:35:55.407335  339690 cri.go:89] found id: "1c33d20dccf9d28551e1afe73e2aa2a5233a190fe5036da5597ab8f98d35e7e1"
	I1025 09:35:55.407340  339690 cri.go:89] found id: "19e3e274001e72f84f8eb6cbd581c789c82111bc575de760d12a318646815997"
	I1025 09:35:55.407359  339690 cri.go:89] found id: "fd4a5a7d8c5f4281000825cc9877d3ea27a21a958879a5db98ee78c72c35f3f4"
	I1025 09:35:55.407363  339690 cri.go:89] found id: "aaacd09fa43cb6730a3a85ccb82d8f4f88d649d37aed22b5d9478f826dd71446"
	I1025 09:35:55.407367  339690 cri.go:89] found id: "5f1abc3fa71fd76f7122379a39679051b1b37e07736695f416558bb08013c9a0"
	I1025 09:35:55.407376  339690 cri.go:89] found id: "ba8a2ae228e5ae5757cffd5f4e4c1b0f6a57d3b7dbac09500e7eb8bad2ffeda6"
	I1025 09:35:55.407380  339690 cri.go:89] found id: "b2e5cedb9fdb4dc8cf750ad182b9d0b075fe38dfe8202975ba1bc91144918969"
	I1025 09:35:55.407384  339690 cri.go:89] found id: "53959ea9bc3e27a71fdfa582a79586fd4fbba5704ce52884b6f578c2371cf734"
	I1025 09:35:55.407389  339690 cri.go:89] found id: "214643b0e233a8f7275185c6308eadd6b3d0e92ec613c31139061014c04338cd"
	I1025 09:35:55.407400  339690 cri.go:89] found id: "1bf763269e9c6cc17fb6ef6bcce3ea5f64cabe52e37b91c75c32967fd2e733f1"
	I1025 09:35:55.407406  339690 cri.go:89] found id: "e59c39fff2eab9a7167d0388e3624c34d57aee469cc349ffd0faa057312a177f"
	I1025 09:35:55.407415  339690 cri.go:89] found id: "42f1c21ebcd7182710da30d4c9fa79ad171f45c43481b3a15df698872a884c69"
	I1025 09:35:55.407419  339690 cri.go:89] found id: "eb8b6e448a83470b682c8b0a60f02504d2943bfc97e4fb2b6411d4a79b1140d5"
	I1025 09:35:55.407425  339690 cri.go:89] found id: "d0697f1703581cace0f227a3658ea08db9401a3bc389367933d7353764a27242"
	I1025 09:35:55.407429  339690 cri.go:89] found id: "b927c0ae13deb3eeb19fde0c0340c1c698692cca9d7ba0a28cdeb1167e99cd0a"
	I1025 09:35:55.407433  339690 cri.go:89] found id: "29d8b7fdf8c84878165f02cb5f082040681528c27127f1406864d3c2332146e8"
	I1025 09:35:55.407440  339690 cri.go:89] found id: "d0a9822bc2dd891d4ea20e8a611c10464e64ace303744b409fe97806e4953c3b"
	I1025 09:35:55.407448  339690 cri.go:89] found id: "19a44bd56404c19362fe3617a93fa60a8364e0f2019540c002e629ef5e155d5f"
	I1025 09:35:55.407452  339690 cri.go:89] found id: "9b9bb34ede66b74552aebc3ff36135f18e45f2f481956f9f42f40048213a058d"
	I1025 09:35:55.407456  339690 cri.go:89] found id: "d0ccb48b50e7aa35ae169d4df1678a14100d85aad31e6a93422011b7ddccd176"
	I1025 09:35:55.407460  339690 cri.go:89] found id: "62e249a5b3adf3fce1b9e30d1842ff783f291aff8d9e9b2c1082ea8806fadacf"
	I1025 09:35:55.407464  339690 cri.go:89] found id: ""
	I1025 09:35:55.407508  339690 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:35:55.423741  339690 out.go:203] 
	W1025 09:35:55.425027  339690 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:35:55.425054  339690 out.go:285] * 
	* 
	W1025 09:35:55.428308  339690 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:35:55.430021  339690 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-582494 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (45.62s)
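Worth noting: the entire CSI data path passed before the disable step failed; provisioning, snapshotting, and restore all completed. Condensed, the sequence the test drove (commands verbatim from the run above, with the wait/poll steps between them elided):

	kubectl --context addons-582494 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-582494 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-582494 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-582494 delete pod task-pv-pod
	kubectl --context addons-582494 delete pvc hpvc
	kubectl --context addons-582494 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-582494 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
	kubectl --context addons-582494 delete pod task-pv-pod-restore
	kubectl --context addons-582494 delete pvc hpvc-restore
	kubectl --context addons-582494 delete volumesnapshot new-snapshot-demo

Only the trailing `addons disable volumesnapshots` and `addons disable csi-hostpath-driver` cleanup calls failed, both on the runc pause check described under the Ingress failure.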

TestAddons/parallel/Headlamp (2.75s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-582494 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-582494 --alsologtostderr -v=1: exit status 11 (264.772267ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:34:58.311197  335543 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:34:58.312353  335543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:34:58.312368  335543 out.go:374] Setting ErrFile to fd 2...
	I1025 09:34:58.312374  335543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:34:58.312619  335543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 09:34:58.312995  335543 mustload.go:65] Loading cluster: addons-582494
	I1025 09:34:58.313446  335543 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:34:58.313468  335543 addons.go:606] checking whether the cluster is paused
	I1025 09:34:58.313574  335543 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:34:58.313590  335543 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:34:58.313994  335543 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:34:58.332994  335543 ssh_runner.go:195] Run: systemctl --version
	I1025 09:34:58.333065  335543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:34:58.352473  335543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:34:58.455726  335543 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:34:58.455823  335543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:34:58.486635  335543 cri.go:89] found id: "a590641d195442d7c8f9417c224a3dfd0909fa17ff5dffbbee56d77203a7bc30"
	I1025 09:34:58.486662  335543 cri.go:89] found id: "14680175d4318c6439bfb260920f00d4fb15de1e1ed56f7cd5a7fdc5088d817c"
	I1025 09:34:58.486667  335543 cri.go:89] found id: "8b3ea24513b9dbeed1495a8ece257396262d09ae53d85f508fd9e1aa15fae881"
	I1025 09:34:58.486672  335543 cri.go:89] found id: "1c33d20dccf9d28551e1afe73e2aa2a5233a190fe5036da5597ab8f98d35e7e1"
	I1025 09:34:58.486675  335543 cri.go:89] found id: "19e3e274001e72f84f8eb6cbd581c789c82111bc575de760d12a318646815997"
	I1025 09:34:58.486679  335543 cri.go:89] found id: "fd4a5a7d8c5f4281000825cc9877d3ea27a21a958879a5db98ee78c72c35f3f4"
	I1025 09:34:58.486682  335543 cri.go:89] found id: "aaacd09fa43cb6730a3a85ccb82d8f4f88d649d37aed22b5d9478f826dd71446"
	I1025 09:34:58.486686  335543 cri.go:89] found id: "5f1abc3fa71fd76f7122379a39679051b1b37e07736695f416558bb08013c9a0"
	I1025 09:34:58.486690  335543 cri.go:89] found id: "ba8a2ae228e5ae5757cffd5f4e4c1b0f6a57d3b7dbac09500e7eb8bad2ffeda6"
	I1025 09:34:58.486707  335543 cri.go:89] found id: "b2e5cedb9fdb4dc8cf750ad182b9d0b075fe38dfe8202975ba1bc91144918969"
	I1025 09:34:58.486710  335543 cri.go:89] found id: "53959ea9bc3e27a71fdfa582a79586fd4fbba5704ce52884b6f578c2371cf734"
	I1025 09:34:58.486712  335543 cri.go:89] found id: "214643b0e233a8f7275185c6308eadd6b3d0e92ec613c31139061014c04338cd"
	I1025 09:34:58.486715  335543 cri.go:89] found id: "1bf763269e9c6cc17fb6ef6bcce3ea5f64cabe52e37b91c75c32967fd2e733f1"
	I1025 09:34:58.486717  335543 cri.go:89] found id: "e59c39fff2eab9a7167d0388e3624c34d57aee469cc349ffd0faa057312a177f"
	I1025 09:34:58.486720  335543 cri.go:89] found id: "42f1c21ebcd7182710da30d4c9fa79ad171f45c43481b3a15df698872a884c69"
	I1025 09:34:58.486724  335543 cri.go:89] found id: "eb8b6e448a83470b682c8b0a60f02504d2943bfc97e4fb2b6411d4a79b1140d5"
	I1025 09:34:58.486727  335543 cri.go:89] found id: "d0697f1703581cace0f227a3658ea08db9401a3bc389367933d7353764a27242"
	I1025 09:34:58.486735  335543 cri.go:89] found id: "b927c0ae13deb3eeb19fde0c0340c1c698692cca9d7ba0a28cdeb1167e99cd0a"
	I1025 09:34:58.486737  335543 cri.go:89] found id: "29d8b7fdf8c84878165f02cb5f082040681528c27127f1406864d3c2332146e8"
	I1025 09:34:58.486739  335543 cri.go:89] found id: "d0a9822bc2dd891d4ea20e8a611c10464e64ace303744b409fe97806e4953c3b"
	I1025 09:34:58.486742  335543 cri.go:89] found id: "19a44bd56404c19362fe3617a93fa60a8364e0f2019540c002e629ef5e155d5f"
	I1025 09:34:58.486744  335543 cri.go:89] found id: "9b9bb34ede66b74552aebc3ff36135f18e45f2f481956f9f42f40048213a058d"
	I1025 09:34:58.486747  335543 cri.go:89] found id: "d0ccb48b50e7aa35ae169d4df1678a14100d85aad31e6a93422011b7ddccd176"
	I1025 09:34:58.486750  335543 cri.go:89] found id: "62e249a5b3adf3fce1b9e30d1842ff783f291aff8d9e9b2c1082ea8806fadacf"
	I1025 09:34:58.486758  335543 cri.go:89] found id: ""
	I1025 09:34:58.486803  335543 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:34:58.502943  335543 out.go:203] 
	W1025 09:34:58.504437  335543 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:34:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:34:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:34:58.504458  335543 out.go:285] * 
	* 
	W1025 09:34:58.507825  335543 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:34:58.509480  335543 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-582494 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-582494
helpers_test.go:243: (dbg) docker inspect addons-582494:

-- stdout --
	[
	    {
	        "Id": "a7ce438518590abcd5d536f30162cd83066b6f288f1c8f26ff6a111d80f7e227",
	        "Created": "2025-10-25T09:32:40.58689965Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 327412,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:32:40.62334152Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/a7ce438518590abcd5d536f30162cd83066b6f288f1c8f26ff6a111d80f7e227/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a7ce438518590abcd5d536f30162cd83066b6f288f1c8f26ff6a111d80f7e227/hostname",
	        "HostsPath": "/var/lib/docker/containers/a7ce438518590abcd5d536f30162cd83066b6f288f1c8f26ff6a111d80f7e227/hosts",
	        "LogPath": "/var/lib/docker/containers/a7ce438518590abcd5d536f30162cd83066b6f288f1c8f26ff6a111d80f7e227/a7ce438518590abcd5d536f30162cd83066b6f288f1c8f26ff6a111d80f7e227-json.log",
	        "Name": "/addons-582494",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-582494:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-582494",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a7ce438518590abcd5d536f30162cd83066b6f288f1c8f26ff6a111d80f7e227",
	                "LowerDir": "/var/lib/docker/overlay2/10a40b574ff84e32355b08c83c6a2e1e344be14f7dde75bab0523cd4850e1746-init/diff:/var/lib/docker/overlay2/9d1960ec6ab151df7efbd4d43f9ccd1aaf4e0bd9e7db4285118644a6d5a54279/diff",
	                "MergedDir": "/var/lib/docker/overlay2/10a40b574ff84e32355b08c83c6a2e1e344be14f7dde75bab0523cd4850e1746/merged",
	                "UpperDir": "/var/lib/docker/overlay2/10a40b574ff84e32355b08c83c6a2e1e344be14f7dde75bab0523cd4850e1746/diff",
	                "WorkDir": "/var/lib/docker/overlay2/10a40b574ff84e32355b08c83c6a2e1e344be14f7dde75bab0523cd4850e1746/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-582494",
	                "Source": "/var/lib/docker/volumes/addons-582494/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-582494",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-582494",
	                "name.minikube.sigs.k8s.io": "addons-582494",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4020f793b3162eb0bb0e79b3984f3c5aad4f6a54e19a76f9936eb27f065c6406",
	            "SandboxKey": "/var/run/docker/netns/4020f793b316",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-582494": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:49:78:8e:1e:c6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "09e159133a12c1c4eda5dd1d02a15878cfb36d205e857ff1b7046b1a63057f54",
	                    "EndpointID": "79fdf4e4be84bf8a3ba17b737d040f83cd9c9c0902716d626a030afcfde419eb",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-582494",
	                        "a7ce43851859"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
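
The JSON above is plain docker container inspect output for the cluster's node container. Individual fields can be read with Go templates instead of scanning the whole document; a minimal sketch against the container above (template paths follow Docker's inspect schema, expected values taken from the output shown):

	docker container inspect addons-582494 --format '{{ (index .NetworkSettings.Networks "addons-582494").IPAddress }}'    # -> 192.168.49.2
	docker container inspect addons-582494 --format '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}'   # -> 32771

The "22/tcp" variant of the second template is exactly what the minikube log below runs to discover the SSH host port.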
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-582494 -n addons-582494
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-582494 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-582494 logs -n 25: (1.226803895s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-278458 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-278458   │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ delete  │ -p download-only-278458                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-278458   │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ start   │ -o=json --download-only -p download-only-731105 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-731105   │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ delete  │ -p download-only-731105                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-731105   │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ delete  │ -p download-only-278458                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-278458   │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ delete  │ -p download-only-731105                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-731105   │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ start   │ --download-only -p download-docker-053726 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-053726 │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ delete  │ -p download-docker-053726                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-053726 │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ start   │ --download-only -p binary-mirror-351445 --alsologtostderr --binary-mirror http://127.0.0.1:36611 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-351445   │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ delete  │ -p binary-mirror-351445                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-351445   │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ addons  │ enable dashboard -p addons-582494                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-582494          │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ addons  │ disable dashboard -p addons-582494                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-582494          │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ start   │ -p addons-582494 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-582494          │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ addons-582494 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-582494          │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │                     │
	│ addons  │ addons-582494 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-582494          │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │                     │
	│ addons  │ enable headlamp -p addons-582494 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-582494          │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:32:18
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:32:18.278840  326776 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:32:18.278978  326776 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:32:18.279002  326776 out.go:374] Setting ErrFile to fd 2...
	I1025 09:32:18.279008  326776 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:32:18.279258  326776 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 09:32:18.279832  326776 out.go:368] Setting JSON to false
	I1025 09:32:18.280773  326776 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4487,"bootTime":1761380251,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:32:18.280876  326776 start.go:141] virtualization: kvm guest
	I1025 09:32:18.283020  326776 out.go:179] * [addons-582494] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:32:18.284535  326776 notify.go:220] Checking for updates...
	I1025 09:32:18.284574  326776 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 09:32:18.286058  326776 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:32:18.287667  326776 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 09:32:18.289062  326776 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	I1025 09:32:18.290413  326776 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:32:18.291754  326776 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:32:18.293290  326776 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:32:18.318526  326776 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:32:18.318679  326776 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:32:18.383598  326776 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:47 SystemTime:2025-10-25 09:32:18.372404563 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:32:18.383731  326776 docker.go:318] overlay module found
	I1025 09:32:18.385723  326776 out.go:179] * Using the docker driver based on user configuration
	I1025 09:32:18.387147  326776 start.go:305] selected driver: docker
	I1025 09:32:18.387163  326776 start.go:925] validating driver "docker" against <nil>
	I1025 09:32:18.387175  326776 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:32:18.387762  326776 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:32:18.451941  326776 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:47 SystemTime:2025-10-25 09:32:18.441061842 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:32:18.452118  326776 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:32:18.452296  326776 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:32:18.454157  326776 out.go:179] * Using Docker driver with root privileges
	I1025 09:32:18.455819  326776 cni.go:84] Creating CNI manager for ""
	I1025 09:32:18.455885  326776 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:32:18.455897  326776 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:32:18.455980  326776 start.go:349] cluster config:
	{Name:addons-582494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-582494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:32:18.457503  326776 out.go:179] * Starting "addons-582494" primary control-plane node in "addons-582494" cluster
	I1025 09:32:18.458825  326776 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:32:18.460205  326776 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:32:18.461517  326776 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:32:18.461569  326776 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:32:18.461583  326776 cache.go:58] Caching tarball of preloaded images
	I1025 09:32:18.461680  326776 preload.go:233] Found /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:32:18.461679  326776 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:32:18.461695  326776 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:32:18.462020  326776 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/config.json ...
	I1025 09:32:18.462051  326776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/config.json: {Name:mkb06601fc8d67ab1feb33e8665675381486554a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:18.481608  326776 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1025 09:32:18.481780  326776 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1025 09:32:18.481812  326776 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1025 09:32:18.481820  326776 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1025 09:32:18.481832  326776 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1025 09:32:18.481840  326776 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1025 09:32:32.779270  326776 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1025 09:32:32.779308  326776 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:32:32.779398  326776 start.go:360] acquireMachinesLock for addons-582494: {Name:mk7ae4df9f0d4b2c8062e32fc416860ac419156c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:32:32.779540  326776 start.go:364] duration metric: took 110.573µs to acquireMachinesLock for "addons-582494"
	I1025 09:32:32.779578  326776 start.go:93] Provisioning new machine with config: &{Name:addons-582494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-582494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:32:32.779671  326776 start.go:125] createHost starting for "" (driver="docker")
	I1025 09:32:32.781585  326776 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1025 09:32:32.781845  326776 start.go:159] libmachine.API.Create for "addons-582494" (driver="docker")
	I1025 09:32:32.781876  326776 client.go:168] LocalClient.Create starting
	I1025 09:32:32.782047  326776 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem
	I1025 09:32:32.853582  326776 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem
	I1025 09:32:33.251807  326776 cli_runner.go:164] Run: docker network inspect addons-582494 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:32:33.269558  326776 cli_runner.go:211] docker network inspect addons-582494 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:32:33.269645  326776 network_create.go:284] running [docker network inspect addons-582494] to gather additional debugging logs...
	I1025 09:32:33.269673  326776 cli_runner.go:164] Run: docker network inspect addons-582494
	W1025 09:32:33.287793  326776 cli_runner.go:211] docker network inspect addons-582494 returned with exit code 1
	I1025 09:32:33.287831  326776 network_create.go:287] error running [docker network inspect addons-582494]: docker network inspect addons-582494: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-582494 not found
	I1025 09:32:33.287845  326776 network_create.go:289] output of [docker network inspect addons-582494]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-582494 not found
	
	** /stderr **
	I1025 09:32:33.287938  326776 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:32:33.306521  326776 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018ec860}
	I1025 09:32:33.306574  326776 network_create.go:124] attempt to create docker network addons-582494 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 09:32:33.306624  326776 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-582494 addons-582494
	I1025 09:32:33.370986  326776 network_create.go:108] docker network addons-582494 192.168.49.0/24 created
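	# Sketch: the freshly created network can be verified with the same inspect
	# template style used elsewhere in this log (expected values per the two
	# lines above):
	docker network inspect addons-582494 --format '{{ range .IPAM.Config }}{{ .Subnet }} via {{ .Gateway }}{{ end }}'
	# -> 192.168.49.0/24 via 192.168.49.1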
	I1025 09:32:33.371015  326776 kic.go:121] calculated static IP "192.168.49.2" for the "addons-582494" container
	I1025 09:32:33.371074  326776 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:32:33.388528  326776 cli_runner.go:164] Run: docker volume create addons-582494 --label name.minikube.sigs.k8s.io=addons-582494 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:32:33.411916  326776 oci.go:103] Successfully created a docker volume addons-582494
	I1025 09:32:33.412014  326776 cli_runner.go:164] Run: docker run --rm --name addons-582494-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-582494 --entrypoint /usr/bin/test -v addons-582494:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:32:35.926085  326776 cli_runner.go:217] Completed: docker run --rm --name addons-582494-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-582494 --entrypoint /usr/bin/test -v addons-582494:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.513990099s)
	I1025 09:32:35.926126  326776 oci.go:107] Successfully prepared a docker volume addons-582494
	I1025 09:32:35.926144  326776 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:32:35.926169  326776 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:32:35.926282  326776 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-582494:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 09:32:40.514678  326776 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-582494:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.588340028s)
	I1025 09:32:40.514719  326776 kic.go:203] duration metric: took 4.588547297s to extract preloaded images to volume ...
	W1025 09:32:40.514827  326776 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 09:32:40.514872  326776 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 09:32:40.514952  326776 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:32:40.570572  326776 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-582494 --name addons-582494 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-582494 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-582494 --network addons-582494 --ip 192.168.49.2 --volume addons-582494:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
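	# Sketch: the --memory=4096mb flag above is what shows up in HostConfig of
	# the inspect output earlier in this report; with no explicit swap flag,
	# Docker sets MemorySwap to twice the memory limit. Arithmetic check:
	echo $((4096 * 1024 * 1024))       # 4294967296 -> "Memory"
	echo $((2 * 4096 * 1024 * 1024))   # 8589934592 -> "MemorySwap"
	# Each --publish=127.0.0.1::<port> requests an ephemeral host port; one can
	# be resolved after the fact with, for example:
	docker port addons-582494 8443     # -> 127.0.0.1:32771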
	I1025 09:32:40.875070  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Running}}
	I1025 09:32:40.893236  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:32:40.912237  326776 cli_runner.go:164] Run: docker exec addons-582494 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:32:40.960676  326776 oci.go:144] the created container "addons-582494" has a running status.
	I1025 09:32:40.960709  326776 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa...
	I1025 09:32:41.202355  326776 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:32:41.232396  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:32:41.252739  326776 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:32:41.252759  326776 kic_runner.go:114] Args: [docker exec --privileged addons-582494 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 09:32:41.302258  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:32:41.321107  326776 machine.go:93] provisionDockerMachine start ...
	I1025 09:32:41.321235  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:32:41.339597  326776 main.go:141] libmachine: Using SSH client type: native
	I1025 09:32:41.339892  326776 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1025 09:32:41.339917  326776 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:32:41.485818  326776 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-582494
	
	I1025 09:32:41.485859  326776 ubuntu.go:182] provisioning hostname "addons-582494"
	I1025 09:32:41.485941  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:32:41.505038  326776 main.go:141] libmachine: Using SSH client type: native
	I1025 09:32:41.505295  326776 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1025 09:32:41.505341  326776 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-582494 && echo "addons-582494" | sudo tee /etc/hostname
	I1025 09:32:41.659000  326776 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-582494
	
	I1025 09:32:41.659090  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:32:41.677749  326776 main.go:141] libmachine: Using SSH client type: native
	I1025 09:32:41.677962  326776 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1025 09:32:41.677988  326776 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-582494' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-582494/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-582494' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:32:41.820552  326776 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:32:41.820581  326776 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-321838/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-321838/.minikube}
	I1025 09:32:41.820609  326776 ubuntu.go:190] setting up certificates
	I1025 09:32:41.820625  326776 provision.go:84] configureAuth start
	I1025 09:32:41.820693  326776 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-582494
	I1025 09:32:41.839242  326776 provision.go:143] copyHostCerts
	I1025 09:32:41.839369  326776 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem (1679 bytes)
	I1025 09:32:41.839538  326776 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem (1078 bytes)
	I1025 09:32:41.839629  326776 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem (1123 bytes)
	I1025 09:32:41.839705  326776 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem org=jenkins.addons-582494 san=[127.0.0.1 192.168.49.2 addons-582494 localhost minikube]
	I1025 09:32:42.016963  326776 provision.go:177] copyRemoteCerts
	I1025 09:32:42.017028  326776 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:32:42.017065  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:32:42.036113  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:32:42.139751  326776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 09:32:42.161296  326776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 09:32:42.180578  326776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
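	# Sketch: the server certificate generated above carries the SANs from the
	# san=[...] log line (127.0.0.1 192.168.49.2 addons-582494 localhost
	# minikube); they can be inspected with openssl against the path in the log:
	openssl x509 -noout -text -in /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'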
	I1025 09:32:42.198586  326776 provision.go:87] duration metric: took 377.940787ms to configureAuth
	I1025 09:32:42.198616  326776 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:32:42.198807  326776 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:32:42.198910  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:32:42.217627  326776 main.go:141] libmachine: Using SSH client type: native
	I1025 09:32:42.217913  326776 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1025 09:32:42.217937  326776 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:32:42.483013  326776 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:32:42.483034  326776 machine.go:96] duration metric: took 1.161899629s to provisionDockerMachine
	I1025 09:32:42.483046  326776 client.go:171] duration metric: took 9.701159437s to LocalClient.Create
	I1025 09:32:42.483072  326776 start.go:167] duration metric: took 9.701227109s to libmachine.API.Create "addons-582494"
	I1025 09:32:42.483081  326776 start.go:293] postStartSetup for "addons-582494" (driver="docker")
	I1025 09:32:42.483096  326776 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:32:42.483154  326776 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:32:42.483195  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:32:42.502080  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:32:42.605711  326776 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:32:42.609641  326776 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:32:42.609677  326776 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:32:42.609693  326776 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/addons for local assets ...
	I1025 09:32:42.609770  326776 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/files for local assets ...
	I1025 09:32:42.609803  326776 start.go:296] duration metric: took 126.715685ms for postStartSetup
	I1025 09:32:42.610128  326776 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-582494
	I1025 09:32:42.628475  326776 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/config.json ...
	I1025 09:32:42.628761  326776 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:32:42.628802  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:32:42.647151  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:32:42.746014  326776 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:32:42.750861  326776 start.go:128] duration metric: took 9.971165938s to createHost
	I1025 09:32:42.750894  326776 start.go:83] releasing machines lock for "addons-582494", held for 9.971336583s
	I1025 09:32:42.750963  326776 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-582494
	I1025 09:32:42.769421  326776 ssh_runner.go:195] Run: cat /version.json
	I1025 09:32:42.769477  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:32:42.769493  326776 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:32:42.769564  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:32:42.788772  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:32:42.789068  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:32:42.937282  326776 ssh_runner.go:195] Run: systemctl --version
	I1025 09:32:42.944219  326776 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:32:42.981972  326776 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:32:42.986927  326776 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:32:42.987004  326776 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:32:43.015475  326776 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 09:32:43.015505  326776 start.go:495] detecting cgroup driver to use...
	I1025 09:32:43.015546  326776 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:32:43.015607  326776 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:32:43.033752  326776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:32:43.046726  326776 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:32:43.046790  326776 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:32:43.064699  326776 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:32:43.083297  326776 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:32:43.164950  326776 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:32:43.253090  326776 docker.go:234] disabling docker service ...
	I1025 09:32:43.253160  326776 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:32:43.274205  326776 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:32:43.288246  326776 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:32:43.376132  326776 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:32:43.459427  326776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:32:43.473153  326776 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:32:43.488537  326776 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:32:43.488597  326776 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:32:43.499839  326776 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:32:43.499903  326776 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:32:43.509753  326776 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:32:43.519069  326776 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:32:43.528301  326776 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:32:43.536623  326776 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:32:43.545490  326776 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:32:43.559185  326776 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
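	# Sketch: taken together, the sed edits above converge on a drop-in roughly
	# like the following (section headers per CRI-O's documented config schema;
	# this is an illustration, not a capture of the live file):
	cat <<-'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	
	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF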
	I1025 09:32:43.568975  326776 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:32:43.576788  326776 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
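The two probes above flip the kernel knobs that kubeadm's preflight checks care about (IP forwarding and bridged-traffic iptables visibility). An equivalent, persistent form, sketched under the assumption that sysctl.d fragments are honored inside the kicbase container:

    # hypothetical persistent variant of the one-shot echo/sysctl calls above
    printf 'net.ipv4.ip_forward = 1\nnet.bridge.bridge-nf-call-iptables = 1\n' \
      | sudo tee /etc/sysctl.d/99-kubernetes.conf >/dev/null
    sudo sysctl --system   # reload all sysctl.d fragments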
	I1025 09:32:43.584526  326776 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:32:43.663425  326776 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:32:43.773300  326776 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:32:43.773424  326776 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:32:43.777585  326776 start.go:563] Will wait 60s for crictl version
	I1025 09:32:43.777650  326776 ssh_runner.go:195] Run: which crictl
	I1025 09:32:43.781377  326776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:32:43.807809  326776 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:32:43.807932  326776 ssh_runner.go:195] Run: crio --version
	I1025 09:32:43.839882  326776 ssh_runner.go:195] Run: crio --version
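With CRI-O restarted and /etc/crictl.yaml in place, the runtime can be probed directly. A quick manual check equivalent to the version/stat probes above, using stock cri-tools subcommands:

    sudo crictl version      # should report RuntimeName: cri-o, RuntimeVersion: 1.34.1
    sudo crictl info | head  # runtime status and network (CNI) conditions as JSON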
	I1025 09:32:43.872630  326776 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:32:43.873764  326776 cli_runner.go:164] Run: docker network inspect addons-582494 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:32:43.892077  326776 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 09:32:43.896591  326776 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:32:43.907339  326776 kubeadm.go:883] updating cluster {Name:addons-582494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-582494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:32:43.907476  326776 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:32:43.907526  326776 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:32:43.943679  326776 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:32:43.943701  326776 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:32:43.943755  326776 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:32:43.970106  326776 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:32:43.970137  326776 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:32:43.970146  326776 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1025 09:32:43.970283  326776 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-582494 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-582494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:32:43.970374  326776 ssh_runner.go:195] Run: crio config
	I1025 09:32:44.017446  326776 cni.go:84] Creating CNI manager for ""
	I1025 09:32:44.017474  326776 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:32:44.017498  326776 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:32:44.017522  326776 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-582494 NodeName:addons-582494 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:32:44.017640  326776 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-582494"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
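The kubeadm config dumped above combines InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in a single multi-document YAML file. Before handing such a file to kubeadm init it can be sanity-checked offline; one way is kubeadm's own validator (available in recent kubeadm releases; minikube itself does not run this step, so this is purely an illustrative sketch using the paths from this run):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml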
	I1025 09:32:44.017713  326776 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:32:44.026433  326776 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:32:44.026505  326776 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:32:44.034653  326776 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1025 09:32:44.047572  326776 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:32:44.063444  326776 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1025 09:32:44.076801  326776 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:32:44.080590  326776 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:32:44.090755  326776 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:32:44.175386  326776 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:32:44.205262  326776 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494 for IP: 192.168.49.2
	I1025 09:32:44.205290  326776 certs.go:195] generating shared ca certs ...
	I1025 09:32:44.205311  326776 certs.go:227] acquiring lock for ca certs: {Name:mke559c04eea9c265cb2a7beab0da125bee52db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:44.205478  326776 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key
	I1025 09:32:44.361003  326776 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt ...
	I1025 09:32:44.361037  326776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt: {Name:mk8bdce1ee12ddd552187c0d948bc8faa166349d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:44.362108  326776 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key ...
	I1025 09:32:44.362134  326776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key: {Name:mkeb028f943d6e5f4c0f71a867aa7d09d82dd086 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:44.362232  326776 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key
	I1025 09:32:44.503228  326776 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt ...
	I1025 09:32:44.503258  326776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt: {Name:mkdc5eec83a4ed1db9de64e01bce3a9564f328dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:44.503452  326776 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key ...
	I1025 09:32:44.503463  326776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key: {Name:mk0132d56842ddb86bc075b013ce7da7228f9954 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:44.503537  326776 certs.go:257] generating profile certs ...
	I1025 09:32:44.503599  326776 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.key
	I1025 09:32:44.503614  326776 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt with IP's: []
	I1025 09:32:44.719874  326776 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt ...
	I1025 09:32:44.719919  326776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: {Name:mk073c9b62cf012daa3bf0b54b9ac7b3044f5ba2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:44.720144  326776 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.key ...
	I1025 09:32:44.720161  326776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.key: {Name:mkf512384112ba587ed18c996619ac2d8db2d3a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:44.720275  326776 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/apiserver.key.bb4145d9
	I1025 09:32:44.720305  326776 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/apiserver.crt.bb4145d9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1025 09:32:44.925498  326776 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/apiserver.crt.bb4145d9 ...
	I1025 09:32:44.925530  326776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/apiserver.crt.bb4145d9: {Name:mk1e8803e11f4bc0fb40a3388703af7c1ae56fa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:44.926493  326776 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/apiserver.key.bb4145d9 ...
	I1025 09:32:44.926522  326776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/apiserver.key.bb4145d9: {Name:mk00f6aca9f2620b0fdaa9ab574e1849f36a5262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:44.926694  326776 certs.go:382] copying /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/apiserver.crt.bb4145d9 -> /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/apiserver.crt
	I1025 09:32:44.926803  326776 certs.go:386] copying /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/apiserver.key.bb4145d9 -> /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/apiserver.key
	I1025 09:32:44.926876  326776 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/proxy-client.key
	I1025 09:32:44.926904  326776 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/proxy-client.crt with IP's: []
	I1025 09:32:44.975058  326776 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/proxy-client.crt ...
	I1025 09:32:44.975096  326776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/proxy-client.crt: {Name:mk7bf22fc168f20e56262dafad777ba2ef7c0f44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:44.975340  326776 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/proxy-client.key ...
	I1025 09:32:44.975367  326776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/proxy-client.key: {Name:mk3a2a3843a6ae3d2d57e4ea396646616192104d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:44.975674  326776 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:32:44.975723  326776 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:32:44.975761  326776 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:32:44.975809  326776 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem (1679 bytes)
	I1025 09:32:44.976582  326776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:32:44.996488  326776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:32:45.014945  326776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:32:45.033076  326776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:32:45.051471  326776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 09:32:45.069817  326776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:32:45.088373  326776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:32:45.106990  326776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:32:45.125231  326776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:32:45.146750  326776 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:32:45.160349  326776 ssh_runner.go:195] Run: openssl version
	I1025 09:32:45.167211  326776 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:32:45.179663  326776 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:32:45.184080  326776 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:32 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:32:45.184153  326776 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:32:45.219141  326776 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
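The openssl x509 -hash run above is what determines the b5213941.0 symlink name: OpenSSL locates trusted CAs by subject hash, and <hash>.0 under /etc/ssl/certs is the conventional link target. A manual reproduction of the same derivation:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, hence the /etc/ssl/certs/b5213941.0 symlink created above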
	I1025 09:32:45.228985  326776 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:32:45.233364  326776 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:32:45.233427  326776 kubeadm.go:400] StartCluster: {Name:addons-582494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-582494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:32:45.233538  326776 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:32:45.233602  326776 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:32:45.261845  326776 cri.go:89] found id: ""
	I1025 09:32:45.261928  326776 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:32:45.270470  326776 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:32:45.278750  326776 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:32:45.278821  326776 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:32:45.287169  326776 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:32:45.287187  326776 kubeadm.go:157] found existing configuration files:
	
	I1025 09:32:45.287234  326776 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:32:45.295487  326776 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:32:45.295560  326776 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:32:45.303554  326776 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:32:45.311732  326776 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:32:45.311800  326776 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:32:45.319492  326776 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:32:45.327690  326776 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:32:45.327740  326776 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:32:45.335500  326776 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:32:45.343600  326776 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:32:45.343699  326776 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 09:32:45.351592  326776 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:32:45.413863  326776 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 09:32:45.473006  326776 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 09:32:55.417158  326776 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:32:55.417213  326776 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:32:55.417352  326776 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:32:55.417404  326776 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 09:32:55.417458  326776 kubeadm.go:318] OS: Linux
	I1025 09:32:55.417513  326776 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:32:55.417559  326776 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:32:55.417601  326776 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:32:55.417668  326776 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:32:55.417722  326776 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:32:55.417804  326776 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:32:55.417875  326776 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:32:55.417945  326776 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 09:32:55.418032  326776 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:32:55.418122  326776 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:32:55.418212  326776 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:32:55.418337  326776 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 09:32:55.419967  326776 out.go:252]   - Generating certificates and keys ...
	I1025 09:32:55.420043  326776 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:32:55.420123  326776 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:32:55.420195  326776 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:32:55.420245  326776 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:32:55.420297  326776 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:32:55.420375  326776 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:32:55.420423  326776 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:32:55.420521  326776 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-582494 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 09:32:55.420566  326776 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:32:55.420671  326776 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-582494 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 09:32:55.420731  326776 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:32:55.420802  326776 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:32:55.420842  326776 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:32:55.420893  326776 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:32:55.420945  326776 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:32:55.420998  326776 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:32:55.421046  326776 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:32:55.421107  326776 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:32:55.421167  326776 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:32:55.421308  326776 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:32:55.421429  326776 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 09:32:55.422654  326776 out.go:252]   - Booting up control plane ...
	I1025 09:32:55.422773  326776 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:32:55.422900  326776 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:32:55.423015  326776 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:32:55.423226  326776 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:32:55.423386  326776 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 09:32:55.423532  326776 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 09:32:55.423619  326776 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:32:55.423659  326776 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:32:55.423817  326776 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 09:32:55.423973  326776 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 09:32:55.424047  326776 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001178678s
	I1025 09:32:55.424182  326776 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 09:32:55.424302  326776 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1025 09:32:55.424443  326776 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 09:32:55.424552  326776 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 09:32:55.424664  326776 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.374388763s
	I1025 09:32:55.424763  326776 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.787703669s
	I1025 09:32:55.424871  326776 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.502158288s
	I1025 09:32:55.425012  326776 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:32:55.425167  326776 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:32:55.425246  326776 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:32:55.425520  326776 kubeadm.go:318] [mark-control-plane] Marking the node addons-582494 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:32:55.425605  326776 kubeadm.go:318] [bootstrap-token] Using token: i5mo7j.cxciqzlypbk10ivk
	I1025 09:32:55.427024  326776 out.go:252]   - Configuring RBAC rules ...
	I1025 09:32:55.427147  326776 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:32:55.427271  326776 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:32:55.427444  326776 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:32:55.427613  326776 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:32:55.427752  326776 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:32:55.427870  326776 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:32:55.428017  326776 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:32:55.428095  326776 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:32:55.428169  326776 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:32:55.428178  326776 kubeadm.go:318] 
	I1025 09:32:55.428244  326776 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:32:55.428251  326776 kubeadm.go:318] 
	I1025 09:32:55.428360  326776 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:32:55.428379  326776 kubeadm.go:318] 
	I1025 09:32:55.428409  326776 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:32:55.428474  326776 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:32:55.428524  326776 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:32:55.428530  326776 kubeadm.go:318] 
	I1025 09:32:55.428583  326776 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:32:55.428590  326776 kubeadm.go:318] 
	I1025 09:32:55.428633  326776 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:32:55.428639  326776 kubeadm.go:318] 
	I1025 09:32:55.428688  326776 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:32:55.428754  326776 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:32:55.428821  326776 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:32:55.428826  326776 kubeadm.go:318] 
	I1025 09:32:55.428909  326776 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:32:55.429013  326776 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:32:55.429028  326776 kubeadm.go:318] 
	I1025 09:32:55.429140  326776 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token i5mo7j.cxciqzlypbk10ivk \
	I1025 09:32:55.429266  326776 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d220a0fdd8ffa4188e5a3a4b5b37c16072a735a52e5507fcdbcb4d38c461642f \
	I1025 09:32:55.429313  326776 kubeadm.go:318] 	--control-plane 
	I1025 09:32:55.429337  326776 kubeadm.go:318] 
	I1025 09:32:55.429444  326776 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:32:55.429453  326776 kubeadm.go:318] 
	I1025 09:32:55.429565  326776 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token i5mo7j.cxciqzlypbk10ivk \
	I1025 09:32:55.429756  326776 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d220a0fdd8ffa4188e5a3a4b5b37c16072a735a52e5507fcdbcb4d38c461642f 
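The join commands printed above pin the cluster CA by hash: --discovery-token-ca-cert-hash is the SHA-256 digest of the CA's public key. It can be re-derived from the cert on disk with the standard pipeline from the kubeadm documentation (paths as used in this run; assumes the CA key is RSA, as minikube generates):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex
    # the hex digest should match d220a0fdd8ffa4188e5a3a4b5b37c16072a735a52e5507fcdbcb4d38c461642f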
	I1025 09:32:55.429780  326776 cni.go:84] Creating CNI manager for ""
	I1025 09:32:55.429787  326776 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:32:55.431084  326776 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 09:32:55.432235  326776 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 09:32:55.437050  326776 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 09:32:55.437068  326776 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 09:32:55.450959  326776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
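Here the kindnet manifest is applied from /var/tmp/minikube/cni.yaml with the kubeconfig-scoped kubectl. A hedged follow-up check for the rollout (the app=kindnet label selector is an assumption about the manifest's DaemonSet labels, not something shown in this log):

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get ds,pods -l app=kindnet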
	I1025 09:32:55.661119  326776 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:32:55.661217  326776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:32:55.661245  326776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-582494 minikube.k8s.io/updated_at=2025_10_25T09_32_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689 minikube.k8s.io/name=addons-582494 minikube.k8s.io/primary=true
	I1025 09:32:55.672958  326776 ops.go:34] apiserver oom_adj: -16
	I1025 09:32:55.740719  326776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:32:56.240913  326776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:32:56.741511  326776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:32:57.241193  326776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:32:57.740825  326776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:32:58.241159  326776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:32:58.741396  326776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:32:59.241071  326776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:32:59.741394  326776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:00.241553  326776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:00.320295  326776 kubeadm.go:1113] duration metric: took 4.659153612s to wait for elevateKubeSystemPrivileges
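The burst of get sa default runs between 09:32:55.740 and 09:33:00.241 above is a readiness poll: minikube retries until the default ServiceAccount exists, so that the minikube-rbac ClusterRoleBinding created earlier can take effect. A standalone shell equivalent of the same loop (hypothetical, for illustration only):

    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
      sleep 0.5   # retry until the kube-system controllers have created the SA
    done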
	I1025 09:33:00.320360  326776 kubeadm.go:402] duration metric: took 15.086941359s to StartCluster
	I1025 09:33:00.320385  326776 settings.go:142] acquiring lock: {Name:mkccf1ae03faaf532633e370e89b56d0fc3d2b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:00.321202  326776 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 09:33:00.321678  326776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:00.321872  326776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 09:33:00.321909  326776 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:33:00.321987  326776 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1025 09:33:00.322115  326776 addons.go:69] Setting yakd=true in profile "addons-582494"
	I1025 09:33:00.322124  326776 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-582494"
	I1025 09:33:00.322144  326776 addons.go:238] Setting addon yakd=true in "addons-582494"
	I1025 09:33:00.322155  326776 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-582494"
	I1025 09:33:00.322163  326776 addons.go:69] Setting metrics-server=true in profile "addons-582494"
	I1025 09:33:00.322194  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.322194  326776 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:33:00.322206  326776 addons.go:69] Setting registry-creds=true in profile "addons-582494"
	I1025 09:33:00.322207  326776 addons.go:238] Setting addon metrics-server=true in "addons-582494"
	I1025 09:33:00.322218  326776 addons.go:238] Setting addon registry-creds=true in "addons-582494"
	I1025 09:33:00.322229  326776 addons.go:69] Setting cloud-spanner=true in profile "addons-582494"
	I1025 09:33:00.322237  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.322243  326776 addons.go:238] Setting addon cloud-spanner=true in "addons-582494"
	I1025 09:33:00.322247  326776 addons.go:69] Setting storage-provisioner=true in profile "addons-582494"
	I1025 09:33:00.322265  326776 addons.go:238] Setting addon storage-provisioner=true in "addons-582494"
	I1025 09:33:00.322275  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.322285  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.322395  326776 addons.go:69] Setting ingress-dns=true in profile "addons-582494"
	I1025 09:33:00.322414  326776 addons.go:238] Setting addon ingress-dns=true in "addons-582494"
	I1025 09:33:00.322447  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.322820  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.322832  326776 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-582494"
	I1025 09:33:00.322844  326776 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-582494"
	I1025 09:33:00.322849  326776 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-582494"
	I1025 09:33:00.322873  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.322889  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.322890  326776 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-582494"
	I1025 09:33:00.322917  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.323107  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.323161  326776 addons.go:69] Setting default-storageclass=true in profile "addons-582494"
	I1025 09:33:00.323186  326776 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-582494"
	I1025 09:33:00.323342  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.323500  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.323972  326776 addons.go:69] Setting ingress=true in profile "addons-582494"
	I1025 09:33:00.323996  326776 addons.go:238] Setting addon ingress=true in "addons-582494"
	I1025 09:33:00.324043  326776 addons.go:69] Setting volumesnapshots=true in profile "addons-582494"
	I1025 09:33:00.324064  326776 addons.go:238] Setting addon volumesnapshots=true in "addons-582494"
	I1025 09:33:00.324095  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.322180  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.324298  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.324764  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.325057  326776 addons.go:69] Setting volcano=true in profile "addons-582494"
	I1025 09:33:00.325109  326776 addons.go:238] Setting addon volcano=true in "addons-582494"
	I1025 09:33:00.325155  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.325668  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.322189  326776 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-582494"
	I1025 09:33:00.326165  326776 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-582494"
	I1025 09:33:00.326200  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.326346  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.326518  326776 out.go:179] * Verifying Kubernetes components...
	I1025 09:33:00.322832  326776 addons.go:69] Setting inspektor-gadget=true in profile "addons-582494"
	I1025 09:33:00.326705  326776 addons.go:238] Setting addon inspektor-gadget=true in "addons-582494"
	I1025 09:33:00.326739  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.322820  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.322239  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.322209  326776 addons.go:69] Setting gcp-auth=true in profile "addons-582494"
	I1025 09:33:00.327122  326776 mustload.go:65] Loading cluster: addons-582494
	I1025 09:33:00.322820  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.322198  326776 addons.go:69] Setting registry=true in profile "addons-582494"
	I1025 09:33:00.327406  326776 addons.go:238] Setting addon registry=true in "addons-582494"
	I1025 09:33:00.327436  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.328655  326776 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:33:00.335771  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.336127  326776 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:33:00.336890  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.338817  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.337355  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.337783  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.339087  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.363766  326776 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1025 09:33:00.363932  326776 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1025 09:33:00.363844  326776 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1025 09:33:00.365294  326776 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 09:33:00.366013  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1025 09:33:00.366094  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.365295  326776 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 09:33:00.366259  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1025 09:33:00.366450  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.367668  326776 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1025 09:33:00.368908  326776 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1025 09:33:00.370109  326776 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1025 09:33:00.371518  326776 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1025 09:33:00.372661  326776 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1025 09:33:00.375395  326776 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1025 09:33:00.376890  326776 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1025 09:33:00.378892  326776 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1025 09:33:00.378916  326776 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1025 09:33:00.379078  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.400992  326776 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-582494"
	I1025 09:33:00.422926  326776 host.go:66] Checking if "addons-582494" exists ...
	W1025 09:33:00.424724  326776 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1025 09:33:00.401149  326776 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1025 09:33:00.427200  326776 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1025 09:33:00.427224  326776 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1025 09:33:00.427293  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.401330  326776 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1025 09:33:00.428501  326776 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1025 09:33:00.421601  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:00.428822  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.430570  326776 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1025 09:33:00.430600  326776 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1025 09:33:00.430667  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.433280  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.433468  326776 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1025 09:33:00.433489  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1025 09:33:00.433545  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.438341  326776 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:33:00.442724  326776 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1025 09:33:00.442820  326776 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:33:00.444524  326776 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1025 09:33:00.444628  326776 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:33:00.444640  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:33:00.444712  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.444885  326776 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1025 09:33:00.444931  326776 out.go:179]   - Using image docker.io/registry:3.0.0
	I1025 09:33:00.446299  326776 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1025 09:33:00.446335  326776 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1025 09:33:00.446396  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.446471  326776 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1025 09:33:00.446526  326776 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:33:00.446583  326776 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1025 09:33:00.446592  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1025 09:33:00.446644  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.446900  326776 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1025 09:33:00.447576  326776 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 09:33:00.447593  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1025 09:33:00.447645  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.447822  326776 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 09:33:00.447838  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1025 09:33:00.447885  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.449177  326776 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 09:33:00.449195  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1025 09:33:00.449244  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.452088  326776 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1025 09:33:00.453312  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:00.453746  326776 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 09:33:00.453762  326776 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 09:33:00.453824  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.490852  326776 addons.go:238] Setting addon default-storageclass=true in "addons-582494"
	I1025 09:33:00.493434  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:00.494585  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:00.512082  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:00.512244  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:00.513031  326776 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1025 09:33:00.513795  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:00.515141  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:00.519683  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:00.522482  326776 out.go:179]   - Using image docker.io/busybox:stable
	I1025 09:33:00.524838  326776 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 09:33:00.524902  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1025 09:33:00.524998  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.527857  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:00.536517  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:00.536531  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:00.552218  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:00.557158  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	W1025 09:33:00.558819  326776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 09:33:00.558917  326776 retry.go:31] will retry after 313.093398ms: ssh: handshake failed: EOF
	I1025 09:33:00.563586  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	W1025 09:33:00.564761  326776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 09:33:00.564837  326776 retry.go:31] will retry after 264.747724ms: ssh: handshake failed: EOF
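The two handshake EOFs above are expected while the container's sshd is still starting; sshutil retries each dial after a short randomized delay instead of failing the addon install. A minimal shell sketch of that pattern (delays mirror the ~0.26s and ~0.31s values logged above but are otherwise illustrative; minikube's actual backoff lives in retry.go):

	# SSH_KEY is the id_rsa path shown in the sshutil lines above
	for delay in 0.26 0.31 0.5; do
	  ssh -p 32768 -i "$SSH_KEY" docker@127.0.0.1 true && break
	  sleep "$delay"   # wait before the next handshake attempt
	done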
	I1025 09:33:00.566459  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:00.567493  326776 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:33:00.567515  326776 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:33:00.567573  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:00.591373  326776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 09:33:00.591511  326776 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:33:00.602794  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:00.626541  326776 node_ready.go:35] waiting up to 6m0s for node "addons-582494" to be "Ready" ...
	I1025 09:33:00.678355  326776 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1025 09:33:00.678449  326776 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1025 09:33:00.678457  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 09:33:00.679483  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 09:33:00.704832  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 09:33:00.726152  326776 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1025 09:33:00.726254  326776 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1025 09:33:00.731722  326776 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 09:33:00.731798  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1025 09:33:00.743252  326776 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1025 09:33:00.743278  326776 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1025 09:33:00.753260  326776 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1025 09:33:00.753288  326776 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1025 09:33:00.755191  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 09:33:00.768782  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1025 09:33:00.768920  326776 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1025 09:33:00.768944  326776 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1025 09:33:00.770932  326776 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:00.771001  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1025 09:33:00.772988  326776 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 09:33:00.773010  326776 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 09:33:00.774371  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 09:33:00.775066  326776 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1025 09:33:00.775087  326776 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1025 09:33:00.782253  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:33:00.798641  326776 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1025 09:33:00.798666  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1025 09:33:00.804442  326776 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1025 09:33:00.804470  326776 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1025 09:33:00.823109  326776 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1025 09:33:00.823140  326776 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1025 09:33:00.823197  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:00.826939  326776 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1025 09:33:00.826964  326776 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1025 09:33:00.852122  326776 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 09:33:00.852152  326776 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 09:33:00.867996  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1025 09:33:00.873758  326776 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1025 09:33:00.873874  326776 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1025 09:33:00.877847  326776 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1025 09:33:00.877926  326776 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1025 09:33:00.884801  326776 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1025 09:33:00.884889  326776 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1025 09:33:00.930195  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 09:33:00.942304  326776 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1025 09:33:00.942339  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1025 09:33:00.960508  326776 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1025 09:33:00.960596  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1025 09:33:00.982251  326776 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1025 09:33:00.982358  326776 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1025 09:33:00.984734  326776 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
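The sed pipeline at 09:33:00.591373 performed that injection: it inserts a hosts block ahead of the forward directive and a log directive ahead of errors, then feeds the result to kubectl replace. Reconstructed from those sed expressions, the relevant Corefile fragment should now read:

	#         log
	#         errors
	#         ...
	#         hosts {
	#            192.168.49.1 host.minikube.internal
	#            fallthrough
	#         }
	#         forward . /etc/resolv.conf
	# verify with:
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'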
	I1025 09:33:01.024941  326776 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1025 09:33:01.024967  326776 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1025 09:33:01.044493  326776 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 09:33:01.044590  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1025 09:33:01.060540  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1025 09:33:01.100374  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 09:33:01.110833  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:33:01.112478  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 09:33:01.119444  326776 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1025 09:33:01.119470  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1025 09:33:01.173195  326776 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1025 09:33:01.173224  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1025 09:33:01.248851  326776 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 09:33:01.248888  326776 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1025 09:33:01.313514  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 09:33:01.492302  326776 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-582494" context rescaled to 1 replicas
	I1025 09:33:02.138773  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.460184606s)
	I1025 09:33:02.138822  326776 addons.go:479] Verifying addon ingress=true in "addons-582494"
	I1025 09:33:02.138873  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.459359805s)
	I1025 09:33:02.138978  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.434059505s)
	I1025 09:33:02.139048  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.383778129s)
	I1025 09:33:02.139103  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.370295313s)
	I1025 09:33:02.139157  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.36475525s)
	I1025 09:33:02.139187  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.356910223s)
	I1025 09:33:02.139303  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.316073804s)
	I1025 09:33:02.139374  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.271280895s)
	W1025 09:33:02.139387  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:02.139397  326776 addons.go:479] Verifying addon registry=true in "addons-582494"
	I1025 09:33:02.139407  326776 retry.go:31] will retry after 321.319405ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
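This validation failure is consistent with the transfer logged at 09:33:00.427224, where ig-crd.yaml reached the node at only 14 bytes: a document that small cannot carry the apiVersion and kind fields, kubectl rejects it during client-side validation before anything reaches the API server, and that is also why the later --force retries keep failing identically. The behaviour is easy to reproduce with hypothetical manifests (not the inspektor-gadget ones):

	# missing apiVersion/kind should fail with the same
	# "[apiVersion not set, kind not set]" message seen above
	echo 'metadata: {name: x}' | kubectl apply --dry-run=client -f -
	# a document carrying both fields passes client-side validation
	printf 'apiVersion: v1\nkind: Namespace\nmetadata:\n  name: demo\n' \
	  | kubectl apply --dry-run=client -f -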
	I1025 09:33:02.139488  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.20926408s)
	I1025 09:33:02.139618  326776 addons.go:479] Verifying addon metrics-server=true in "addons-582494"
	I1025 09:33:02.139539  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.078972412s)
	I1025 09:33:02.141065  326776 out.go:179] * Verifying ingress addon...
	I1025 09:33:02.141987  326776 out.go:179] * Verifying registry addon...
	I1025 09:33:02.142219  326776 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-582494 service yakd-dashboard -n yakd-dashboard
	
	I1025 09:33:02.143639  326776 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1025 09:33:02.144355  326776 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1025 09:33:02.147190  326776 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1025 09:33:02.147275  326776 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 09:33:02.147295  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
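The kapi.go polling that dominates the rest of this log is a readiness wait keyed on label selectors; "Pending: [<nil>]" means a matched pod exists but has no ready condition yet. The same state can be inspected by hand with the selectors from the log:

	kubectl -n kube-system get pods \
	  -l kubernetes.io/minikube-addons=registry -o wide
	kubectl -n ingress-nginx get pods \
	  -l app.kubernetes.io/name=ingress-nginx -o wide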
	I1025 09:33:02.461267  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1025 09:33:02.636272  326776 node_ready.go:57] node "addons-582494" has "Ready":"False" status (will retry)
	I1025 09:33:02.654037  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:02.654277  326776 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1025 09:33:02.654304  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:02.695148  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.594719436s)
	W1025 09:33:02.695215  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 09:33:02.695238  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.584366099s)
	I1025 09:33:02.695260  326776 retry.go:31] will retry after 237.720522ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
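Unlike the ig-crd failure, this one is a startup race rather than a bad manifest: the VolumeSnapshot CRDs and the VolumeSnapshotClass that instantiates them are applied in a single batch, and the class is validated before the API server has registered the new kind. The retry below succeeds once the CRDs are established; outside this harness the race can be avoided by waiting for establishment explicitly, for example:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml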
	I1025 09:33:02.695338  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.582813986s)
	I1025 09:33:02.695572  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.382016993s)
	I1025 09:33:02.695599  326776 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-582494"
	I1025 09:33:02.697768  326776 out.go:179] * Verifying csi-hostpath-driver addon...
	I1025 09:33:02.702835  326776 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1025 09:33:02.708385  326776 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 09:33:02.708412  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:02.933915  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1025 09:33:03.125250  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:03.125290  326776 retry.go:31] will retry after 533.24161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:03.147691  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:03.147758  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:03.206284  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:03.647468  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:03.647824  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:03.658755  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:03.749001  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:04.147584  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:04.147777  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:04.206931  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:04.647380  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:04.647496  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:04.705896  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:05.130533  326776 node_ready.go:57] node "addons-582494" has "Ready":"False" status (will retry)
	I1025 09:33:05.147893  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:05.148120  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:05.206132  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:05.455975  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.522005459s)
	I1025 09:33:05.456094  326776 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.797303505s)
	W1025 09:33:05.456138  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:05.456165  326776 retry.go:31] will retry after 313.94334ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:05.647064  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:05.647091  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:05.747878  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:05.770938  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:06.147281  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:06.147366  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:06.206608  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:06.334201  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:06.334246  326776 retry.go:31] will retry after 771.808246ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:06.647595  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:06.647780  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:06.707168  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:07.106689  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:07.148035  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:07.148188  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:07.206277  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:07.629701  326776 node_ready.go:57] node "addons-582494" has "Ready":"False" status (will retry)
	I1025 09:33:07.647390  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:07.647492  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:33:07.668986  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:07.669022  326776 retry.go:31] will retry after 1.487519533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:07.748832  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:08.042596  326776 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1025 09:33:08.042665  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:08.061726  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:08.147491  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:08.147542  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:08.174526  326776 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1025 09:33:08.188957  326776 addons.go:238] Setting addon gcp-auth=true in "addons-582494"
	I1025 09:33:08.189029  326776 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:33:08.189447  326776 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:33:08.206910  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:08.208467  326776 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1025 09:33:08.208522  326776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:33:08.227439  326776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:33:08.326243  326776 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:33:08.328064  326776 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1025 09:33:08.329605  326776 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1025 09:33:08.329628  326776 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1025 09:33:08.344125  326776 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1025 09:33:08.344149  326776 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1025 09:33:08.358092  326776 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 09:33:08.358121  326776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1025 09:33:08.372249  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 09:33:08.647762  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:08.647828  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:08.693418  326776 addons.go:479] Verifying addon gcp-auth=true in "addons-582494"
	I1025 09:33:08.694941  326776 out.go:179] * Verifying gcp-auth addon...
	I1025 09:33:08.696928  326776 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1025 09:33:08.748307  326776 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1025 09:33:08.748353  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:08.748329  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:09.147625  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:09.147800  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:09.156963  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:09.201041  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:09.205807  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:09.630219  326776 node_ready.go:57] node "addons-582494" has "Ready":"False" status (will retry)
	I1025 09:33:09.647441  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:09.647615  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:09.700780  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:09.705742  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:09.732987  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:09.733026  326776 retry.go:31] will retry after 1.844626677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:10.148000  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:10.148018  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:10.200906  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:10.206624  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:10.647251  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:10.647518  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:10.700337  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:10.706089  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:11.147745  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:11.147744  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:11.200596  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:11.206485  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:11.577833  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1025 09:33:11.630399  326776 node_ready.go:57] node "addons-582494" has "Ready":"False" status (will retry)
	I1025 09:33:11.647766  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:11.647825  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:11.701210  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:11.705978  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:12.130396  326776 node_ready.go:49] node "addons-582494" is "Ready"
	I1025 09:33:12.130436  326776 node_ready.go:38] duration metric: took 11.503342705s for node "addons-582494" to be "Ready" ...
	I1025 09:33:12.130457  326776 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:33:12.130523  326776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:33:12.150050  326776 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 09:33:12.150075  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:12.150618  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:12.251013  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:12.251081  326776 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 09:33:12.251093  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:12.288199  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:12.288249  326776 retry.go:31] will retry after 2.984999898s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
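	(This apply fails identically on every retry: kubectl's client-side validation requires each YAML document to declare apiVersion and kind, and at least one document in ig-crd.yaml has neither, so backing off and reapplying cannot succeed until the manifest itself is fixed or validation is disabled with --validate=false. Note the stdout shows every other resource applying fine; only the CRD file is rejected. The growing delays in this log (about 3s here, then 4.6s, 5.1s, 8s, and 18.7s on later attempts) follow a jittered-backoff pattern; below is a stand-in sketch of that pattern in Go, not minikube's actual retry.go implementation.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff runs fn up to maxTries times, roughly doubling the
	// delay each attempt and adding jitter so concurrent retries spread out.
	func retryWithBackoff(fn func() error, maxTries int, base time.Duration) error {
		delay := base
		var lastErr error
		for i := 0; i < maxTries; i++ {
			if lastErr = fn(); lastErr == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			sleep := delay + jitter
			fmt.Printf("will retry after %s: %v\n", sleep, lastErr)
			time.Sleep(sleep)
			delay *= 2
		}
		return lastErr
	}

	func main() {
		attempts := 0
		err := retryWithBackoff(func() error {
			attempts++
			if attempts < 3 {
				return errors.New("apply failed") // stand-in for the kubectl apply failure
			}
			return nil
		}, 5, 2*time.Second)
		fmt.Println("result:", err)
	})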
	I1025 09:33:12.288476  326776 api_server.go:72] duration metric: took 11.966523121s to wait for apiserver process to appear ...
	I1025 09:33:12.288500  326776 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:33:12.288525  326776 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 09:33:12.295017  326776 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1025 09:33:12.296371  326776 api_server.go:141] control plane version: v1.34.1
	I1025 09:33:12.296461  326776 api_server.go:131] duration metric: took 7.891368ms to wait for apiserver health ...
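	(The healthz wait is a plain HTTPS GET against the apiserver, repeated until it answers 200 "ok". A self-contained sketch of that probe; it skips TLS verification for brevity, whereas the real check would trust the cluster CA from the kubeconfig.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// NOTE: InsecureSkipVerify is a shortcut for this sketch only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, body)
					return nil
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
			panic(err)
		}
	})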
	I1025 09:33:12.296495  326776 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:33:12.352756  326776 system_pods.go:59] 20 kube-system pods found
	I1025 09:33:12.352888  326776 system_pods.go:61] "amd-gpu-device-plugin-j28pq" [7fd6ba52-5537-4fa5-b6d7-de8391687595] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1025 09:33:12.352906  326776 system_pods.go:61] "coredns-66bc5c9577-x52sm" [1283554a-bcf8-4dbf-a254-32bae102029a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:33:12.352917  326776 system_pods.go:61] "csi-hostpath-attacher-0" [ed192743-8674-4c36-910a-4f221b5c34cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:33:12.352941  326776 system_pods.go:61] "csi-hostpath-resizer-0" [6663357e-c89f-4029-a4c1-81a7efd0aae8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 09:33:12.352950  326776 system_pods.go:61] "csi-hostpathplugin-s5v6k" [88063809-7a2e-4284-9e35-0f92608ae5d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 09:33:12.352955  326776 system_pods.go:61] "etcd-addons-582494" [53a95eb2-58c4-4595-bd03-e8f5f4dc3ade] Running
	I1025 09:33:12.352961  326776 system_pods.go:61] "kindnet-dkqbp" [374e3d3d-59fa-43d3-b177-cd364ff22112] Running
	I1025 09:33:12.352965  326776 system_pods.go:61] "kube-apiserver-addons-582494" [b5ae9e54-eea9-4505-abde-4cd7985ad6ec] Running
	I1025 09:33:12.352970  326776 system_pods.go:61] "kube-controller-manager-addons-582494" [4a44559f-cdc7-4d75-98fb-184789915356] Running
	I1025 09:33:12.352978  326776 system_pods.go:61] "kube-ingress-dns-minikube" [6ef67c79-353a-44ad-ac94-b0700ae8f69e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:33:12.352984  326776 system_pods.go:61] "kube-proxy-fmsgh" [de3dc975-aa0c-4ff8-bb28-52aa41dbb0a0] Running
	I1025 09:33:12.352989  326776 system_pods.go:61] "kube-scheduler-addons-582494" [6d49ca4e-2b8e-47e4-aab1-129f95c38563] Running
	I1025 09:33:12.352996  326776 system_pods.go:61] "metrics-server-85b7d694d7-wnq6w" [5f738d19-fe71-4220-81a0-135edefc3540] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:33:12.353004  326776 system_pods.go:61] "nvidia-device-plugin-daemonset-wln7g" [b1c5c3bc-84d4-426d-988f-f3fdae1b4501] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 09:33:12.353011  326776 system_pods.go:61] "registry-6b586f9694-jftz9" [8a2e1780-bcf0-4e37-98b1-fef42642e586] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:33:12.353018  326776 system_pods.go:61] "registry-creds-764b6fb674-n9dsg" [fe140945-faea-411c-88be-84e6d8ba91bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:33:12.353026  326776 system_pods.go:61] "registry-proxy-vjtwb" [0113a3a7-cfbd-4a9a-a392-206524677a89] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 09:33:12.353037  326776 system_pods.go:61] "snapshot-controller-7d9fbc56b8-b7qwq" [a47a01ea-848f-4bd6-99f9-6df69490ea84] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:33:12.353044  326776 system_pods.go:61] "snapshot-controller-7d9fbc56b8-kww9w" [c1f07f89-6325-491d-8714-7ca0cac5a197] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:33:12.353051  326776 system_pods.go:61] "storage-provisioner" [58c8e38c-db2a-4b1d-ab4b-7d71e84b5f8a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:33:12.353060  326776 system_pods.go:74] duration metric: took 56.557245ms to wait for pod list to return data ...
	I1025 09:33:12.353074  326776 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:33:12.355665  326776 default_sa.go:45] found service account: "default"
	I1025 09:33:12.355694  326776 default_sa.go:55] duration metric: took 2.613422ms for default service account to be created ...
	I1025 09:33:12.355707  326776 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:33:12.453439  326776 system_pods.go:86] 20 kube-system pods found
	I1025 09:33:12.453483  326776 system_pods.go:89] "amd-gpu-device-plugin-j28pq" [7fd6ba52-5537-4fa5-b6d7-de8391687595] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1025 09:33:12.453497  326776 system_pods.go:89] "coredns-66bc5c9577-x52sm" [1283554a-bcf8-4dbf-a254-32bae102029a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:33:12.453509  326776 system_pods.go:89] "csi-hostpath-attacher-0" [ed192743-8674-4c36-910a-4f221b5c34cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:33:12.453519  326776 system_pods.go:89] "csi-hostpath-resizer-0" [6663357e-c89f-4029-a4c1-81a7efd0aae8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 09:33:12.453528  326776 system_pods.go:89] "csi-hostpathplugin-s5v6k" [88063809-7a2e-4284-9e35-0f92608ae5d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 09:33:12.453535  326776 system_pods.go:89] "etcd-addons-582494" [53a95eb2-58c4-4595-bd03-e8f5f4dc3ade] Running
	I1025 09:33:12.453544  326776 system_pods.go:89] "kindnet-dkqbp" [374e3d3d-59fa-43d3-b177-cd364ff22112] Running
	I1025 09:33:12.453555  326776 system_pods.go:89] "kube-apiserver-addons-582494" [b5ae9e54-eea9-4505-abde-4cd7985ad6ec] Running
	I1025 09:33:12.453577  326776 system_pods.go:89] "kube-controller-manager-addons-582494" [4a44559f-cdc7-4d75-98fb-184789915356] Running
	I1025 09:33:12.453595  326776 system_pods.go:89] "kube-ingress-dns-minikube" [6ef67c79-353a-44ad-ac94-b0700ae8f69e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:33:12.453604  326776 system_pods.go:89] "kube-proxy-fmsgh" [de3dc975-aa0c-4ff8-bb28-52aa41dbb0a0] Running
	I1025 09:33:12.453616  326776 system_pods.go:89] "kube-scheduler-addons-582494" [6d49ca4e-2b8e-47e4-aab1-129f95c38563] Running
	I1025 09:33:12.453625  326776 system_pods.go:89] "metrics-server-85b7d694d7-wnq6w" [5f738d19-fe71-4220-81a0-135edefc3540] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:33:12.453634  326776 system_pods.go:89] "nvidia-device-plugin-daemonset-wln7g" [b1c5c3bc-84d4-426d-988f-f3fdae1b4501] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 09:33:12.453643  326776 system_pods.go:89] "registry-6b586f9694-jftz9" [8a2e1780-bcf0-4e37-98b1-fef42642e586] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:33:12.453653  326776 system_pods.go:89] "registry-creds-764b6fb674-n9dsg" [fe140945-faea-411c-88be-84e6d8ba91bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:33:12.453666  326776 system_pods.go:89] "registry-proxy-vjtwb" [0113a3a7-cfbd-4a9a-a392-206524677a89] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 09:33:12.453680  326776 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b7qwq" [a47a01ea-848f-4bd6-99f9-6df69490ea84] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:33:12.453693  326776 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kww9w" [c1f07f89-6325-491d-8714-7ca0cac5a197] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:33:12.453706  326776 system_pods.go:89] "storage-provisioner" [58c8e38c-db2a-4b1d-ab4b-7d71e84b5f8a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:33:12.453733  326776 retry.go:31] will retry after 224.907087ms: missing components: kube-dns
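	(The "missing components: kube-dns" retry comes from a required-components scan over kube-system pods. A sketch of that check, assuming, as the log suggests, that the kube-dns component is satisfied by a Running coredns-* pod; the component-to-prefix mapping here is an illustrative assumption.

	package main

	import (
		"context"
		"fmt"
		"strings"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// missingComponents returns the components with no Running pod whose
	// name starts with the component's expected prefix.
	func missingComponents(ctx context.Context, cs kubernetes.Interface, required map[string]string) ([]string, error) {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return nil, err
		}
		var missing []string
		for component, prefix := range required {
			found := false
			for _, p := range pods.Items {
				if strings.HasPrefix(p.Name, prefix) && p.Status.Phase == corev1.PodRunning {
					found = true
					break
				}
			}
			if !found {
				missing = append(missing, component)
			}
		}
		return missing, nil
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		missing, err := missingComponents(context.Background(), cs, map[string]string{
			"kube-dns":       "coredns",
			"kube-apiserver": "kube-apiserver",
		})
		if err != nil {
			panic(err)
		}
		fmt.Println("missing components:", missing)
	})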
	I1025 09:33:12.647801  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:12.647856  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:12.683027  326776 system_pods.go:86] 20 kube-system pods found
	I1025 09:33:12.683065  326776 system_pods.go:89] "amd-gpu-device-plugin-j28pq" [7fd6ba52-5537-4fa5-b6d7-de8391687595] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1025 09:33:12.683075  326776 system_pods.go:89] "coredns-66bc5c9577-x52sm" [1283554a-bcf8-4dbf-a254-32bae102029a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:33:12.683086  326776 system_pods.go:89] "csi-hostpath-attacher-0" [ed192743-8674-4c36-910a-4f221b5c34cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:33:12.683094  326776 system_pods.go:89] "csi-hostpath-resizer-0" [6663357e-c89f-4029-a4c1-81a7efd0aae8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 09:33:12.683102  326776 system_pods.go:89] "csi-hostpathplugin-s5v6k" [88063809-7a2e-4284-9e35-0f92608ae5d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 09:33:12.683108  326776 system_pods.go:89] "etcd-addons-582494" [53a95eb2-58c4-4595-bd03-e8f5f4dc3ade] Running
	I1025 09:33:12.683115  326776 system_pods.go:89] "kindnet-dkqbp" [374e3d3d-59fa-43d3-b177-cd364ff22112] Running
	I1025 09:33:12.683122  326776 system_pods.go:89] "kube-apiserver-addons-582494" [b5ae9e54-eea9-4505-abde-4cd7985ad6ec] Running
	I1025 09:33:12.683128  326776 system_pods.go:89] "kube-controller-manager-addons-582494" [4a44559f-cdc7-4d75-98fb-184789915356] Running
	I1025 09:33:12.683136  326776 system_pods.go:89] "kube-ingress-dns-minikube" [6ef67c79-353a-44ad-ac94-b0700ae8f69e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:33:12.683145  326776 system_pods.go:89] "kube-proxy-fmsgh" [de3dc975-aa0c-4ff8-bb28-52aa41dbb0a0] Running
	I1025 09:33:12.683152  326776 system_pods.go:89] "kube-scheduler-addons-582494" [6d49ca4e-2b8e-47e4-aab1-129f95c38563] Running
	I1025 09:33:12.683160  326776 system_pods.go:89] "metrics-server-85b7d694d7-wnq6w" [5f738d19-fe71-4220-81a0-135edefc3540] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:33:12.683170  326776 system_pods.go:89] "nvidia-device-plugin-daemonset-wln7g" [b1c5c3bc-84d4-426d-988f-f3fdae1b4501] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 09:33:12.683180  326776 system_pods.go:89] "registry-6b586f9694-jftz9" [8a2e1780-bcf0-4e37-98b1-fef42642e586] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:33:12.683189  326776 system_pods.go:89] "registry-creds-764b6fb674-n9dsg" [fe140945-faea-411c-88be-84e6d8ba91bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:33:12.683200  326776 system_pods.go:89] "registry-proxy-vjtwb" [0113a3a7-cfbd-4a9a-a392-206524677a89] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 09:33:12.683212  326776 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b7qwq" [a47a01ea-848f-4bd6-99f9-6df69490ea84] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:33:12.683225  326776 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kww9w" [c1f07f89-6325-491d-8714-7ca0cac5a197] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:33:12.683233  326776 system_pods.go:89] "storage-provisioner" [58c8e38c-db2a-4b1d-ab4b-7d71e84b5f8a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:33:12.683256  326776 retry.go:31] will retry after 240.596808ms: missing components: kube-dns
	I1025 09:33:12.700625  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:12.706728  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:12.930437  326776 system_pods.go:86] 20 kube-system pods found
	I1025 09:33:12.930476  326776 system_pods.go:89] "amd-gpu-device-plugin-j28pq" [7fd6ba52-5537-4fa5-b6d7-de8391687595] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1025 09:33:12.930484  326776 system_pods.go:89] "coredns-66bc5c9577-x52sm" [1283554a-bcf8-4dbf-a254-32bae102029a] Running
	I1025 09:33:12.930495  326776 system_pods.go:89] "csi-hostpath-attacher-0" [ed192743-8674-4c36-910a-4f221b5c34cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:33:12.930503  326776 system_pods.go:89] "csi-hostpath-resizer-0" [6663357e-c89f-4029-a4c1-81a7efd0aae8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 09:33:12.930512  326776 system_pods.go:89] "csi-hostpathplugin-s5v6k" [88063809-7a2e-4284-9e35-0f92608ae5d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 09:33:12.930519  326776 system_pods.go:89] "etcd-addons-582494" [53a95eb2-58c4-4595-bd03-e8f5f4dc3ade] Running
	I1025 09:33:12.930534  326776 system_pods.go:89] "kindnet-dkqbp" [374e3d3d-59fa-43d3-b177-cd364ff22112] Running
	I1025 09:33:12.930544  326776 system_pods.go:89] "kube-apiserver-addons-582494" [b5ae9e54-eea9-4505-abde-4cd7985ad6ec] Running
	I1025 09:33:12.930559  326776 system_pods.go:89] "kube-controller-manager-addons-582494" [4a44559f-cdc7-4d75-98fb-184789915356] Running
	I1025 09:33:12.930575  326776 system_pods.go:89] "kube-ingress-dns-minikube" [6ef67c79-353a-44ad-ac94-b0700ae8f69e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:33:12.930585  326776 system_pods.go:89] "kube-proxy-fmsgh" [de3dc975-aa0c-4ff8-bb28-52aa41dbb0a0] Running
	I1025 09:33:12.930591  326776 system_pods.go:89] "kube-scheduler-addons-582494" [6d49ca4e-2b8e-47e4-aab1-129f95c38563] Running
	I1025 09:33:12.930602  326776 system_pods.go:89] "metrics-server-85b7d694d7-wnq6w" [5f738d19-fe71-4220-81a0-135edefc3540] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:33:12.930611  326776 system_pods.go:89] "nvidia-device-plugin-daemonset-wln7g" [b1c5c3bc-84d4-426d-988f-f3fdae1b4501] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 09:33:12.930622  326776 system_pods.go:89] "registry-6b586f9694-jftz9" [8a2e1780-bcf0-4e37-98b1-fef42642e586] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:33:12.930634  326776 system_pods.go:89] "registry-creds-764b6fb674-n9dsg" [fe140945-faea-411c-88be-84e6d8ba91bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:33:12.930642  326776 system_pods.go:89] "registry-proxy-vjtwb" [0113a3a7-cfbd-4a9a-a392-206524677a89] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 09:33:12.930649  326776 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b7qwq" [a47a01ea-848f-4bd6-99f9-6df69490ea84] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:33:12.930661  326776 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kww9w" [c1f07f89-6325-491d-8714-7ca0cac5a197] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:33:12.930666  326776 system_pods.go:89] "storage-provisioner" [58c8e38c-db2a-4b1d-ab4b-7d71e84b5f8a] Running
	I1025 09:33:12.930689  326776 system_pods.go:126] duration metric: took 574.973901ms to wait for k8s-apps to be running ...
	I1025 09:33:12.930702  326776 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:33:12.930766  326776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:33:12.949614  326776 system_svc.go:56] duration metric: took 18.896651ms WaitForService to wait for kubelet
	I1025 09:33:12.949655  326776 kubeadm.go:586] duration metric: took 12.627712487s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:33:12.949683  326776 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:33:12.953383  326776 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:33:12.953418  326776 node_conditions.go:123] node cpu capacity is 8
	I1025 09:33:12.953434  326776 node_conditions.go:105] duration metric: took 3.744419ms to run NodePressure ...
	I1025 09:33:12.953451  326776 start.go:241] waiting for startup goroutines ...
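	(The NodePressure lines above read the node's capacity fields: ephemeral storage and CPU. A short client-go sketch that prints those same two values; the kubeconfig path is again an assumption for illustration.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Capacity is a map of resource name to quantity; take local
			// copies so the Quantity values are addressable for String().
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		}
	})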
	I1025 09:33:13.148185  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:13.148576  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:13.201056  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:13.206526  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:13.647640  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:13.647674  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:13.701152  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:13.706779  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:14.149201  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:14.149251  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:14.200696  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:14.207178  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:14.647865  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:14.648021  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:14.701140  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:14.706060  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:15.148443  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:15.148496  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:15.200692  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:15.206510  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:15.273426  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:15.647947  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:15.648017  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:15.700770  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:15.707483  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:15.975859  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:15.975896  326776 retry.go:31] will retry after 4.599527408s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:16.147453  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:16.147637  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:16.200562  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:16.206446  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:16.648710  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:16.648737  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:16.701161  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:16.706659  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:17.148127  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:17.148243  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:17.202078  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:17.206851  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:17.647307  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:17.647480  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:17.700232  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:17.706178  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:18.148264  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:18.148269  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:18.249065  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:18.249184  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:18.647959  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:18.648192  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:18.701293  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:18.706583  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:19.148030  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:19.148164  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:19.200656  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:19.206741  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:19.647503  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:19.647608  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:19.701577  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:19.706704  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:20.148282  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:20.148431  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:20.200191  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:20.206165  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:20.575689  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:20.648372  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:20.648409  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:20.700343  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:20.706303  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:21.143200  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:21.143239  326776 retry.go:31] will retry after 5.115419773s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:21.147367  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:21.147818  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:21.200116  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:21.205847  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:21.647655  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:21.647696  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:21.701069  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:21.707504  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:22.148670  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:22.148734  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:22.200762  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:22.207005  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:22.649497  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:22.649775  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:22.701475  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:22.706787  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:23.147427  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:23.147651  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:23.201369  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:23.206459  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:23.647302  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:23.647504  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:23.700760  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:23.706827  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:24.147768  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:24.147778  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:24.200818  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:24.206818  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:24.649607  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:24.652117  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:24.700497  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:24.709215  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:25.149308  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:25.149654  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:25.200960  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:25.207718  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:25.647841  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:25.648097  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:25.700491  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:25.706779  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:26.147253  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:26.147692  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:26.201288  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:26.207026  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:26.259098  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:26.647846  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:26.648000  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:26.700227  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:26.706132  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:26.970750  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:26.970789  326776 retry.go:31] will retry after 8.001289699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:27.148227  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:27.148340  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:27.201263  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:27.206430  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:27.648206  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:27.648539  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:27.700680  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:27.706984  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:28.148225  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:28.150035  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:28.201412  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:28.206820  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:28.647800  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:28.647990  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:28.700970  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:28.706995  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:29.147908  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:29.148030  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:29.201306  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:29.206062  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:29.647543  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:29.647656  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:29.700289  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:29.706706  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:30.147035  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:30.147564  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:30.200804  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:30.206666  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:30.648206  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:30.648246  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:30.700722  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:30.707109  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:31.148116  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:31.148222  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:31.200431  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:31.206646  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:31.647444  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:31.648138  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:31.700541  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:31.707269  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:32.148070  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:32.148106  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:32.200379  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:32.206884  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:32.647561  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:32.647613  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:32.701238  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:32.706476  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:33.148057  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:33.148131  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:33.201093  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:33.206126  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:33.689158  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:33.689259  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:33.699750  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:33.706689  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:34.147508  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:34.147566  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:34.247741  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:34.248062  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:34.647803  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:34.648011  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:34.700693  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:34.706252  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:34.972439  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:35.148514  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:35.148843  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:35.200209  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:35.206644  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:35.649907  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:35.651710  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:35.704376  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:35.710341  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:35.805688  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:35.805786  326776 retry.go:31] will retry after 18.678082557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
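Editor's note: the failure above is client-side validation — kubectl rejects any manifest document that omits apiVersion and kind. A minimal triage sketch in shell; the CRD header shown is an assumption about what ig-crd.yaml should contain, not its observed contents:

    # Inspect the head of the rejected manifest:
    head -n 5 /etc/kubernetes/addons/ig-crd.yaml
    # Every Kubernetes object must declare apiVersion and kind; for a CRD that would be:
    #   apiVersion: apiextensions.k8s.io/v1
    #   kind: CustomResourceDefinition
    # Last-resort workaround named by the error message itself (skips client-side validation):
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/ig-crd.yaml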
	I1025 09:33:36.148179  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:36.148180  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:36.200795  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:36.207091  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:36.649247  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:36.649486  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:36.700892  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:36.706513  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:37.148703  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:37.148908  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:37.201104  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:37.206504  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:37.647423  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:37.647530  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:37.700901  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:37.707293  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:38.148120  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:38.148199  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:38.200441  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:38.206973  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:38.648168  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:38.648365  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:38.700710  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:38.706934  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:39.147439  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:39.148048  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:39.201731  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:39.206954  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:39.647474  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:39.647518  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:39.700589  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:39.706842  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:40.147597  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:40.147805  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:40.201483  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:40.207134  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:40.648145  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:40.648478  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:40.700145  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:40.706472  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:41.147869  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:41.148259  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:41.200762  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:41.206910  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:41.647339  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:41.647766  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:41.700854  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:41.707261  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:42.147958  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:42.148005  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:42.200785  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:42.206963  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:42.647671  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:42.647765  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:42.700782  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:42.706930  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:43.147833  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:43.148313  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:43.200207  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:43.206590  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:43.647832  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:43.647929  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:43.701125  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:43.705967  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:44.147626  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:44.147808  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:44.200612  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:44.206571  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:44.647714  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:44.647874  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:44.700771  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:44.706459  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:45.148626  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:45.148740  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:45.201269  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:45.206998  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:45.648198  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:45.648430  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:45.700505  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:45.706655  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:46.147642  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:46.147645  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:46.200672  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:46.206593  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:46.647807  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:46.647849  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:46.701349  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:46.706760  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:47.147870  326776 kapi.go:107] duration metric: took 45.003509589s to wait for kubernetes.io/minikube-addons=registry ...
	I1025 09:33:47.148041  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:47.245668  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:47.246558  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:47.647867  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:47.700774  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:47.707173  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:48.148676  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:48.201026  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:48.206944  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:48.647901  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:48.727073  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:48.727343  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:49.148563  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:49.200436  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:49.206372  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:49.648266  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:49.710231  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:49.710982  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:50.147266  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:50.200742  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:50.206967  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:50.648437  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:50.700983  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:50.707347  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:51.147858  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:51.201116  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:51.206667  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:51.647144  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:51.702066  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:51.706041  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:52.148109  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:52.200963  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:52.207508  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:52.647556  326776 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:52.700560  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:52.706825  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:53.149235  326776 kapi.go:107] duration metric: took 51.005594228s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1025 09:33:53.200997  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:53.207374  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:53.701640  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:53.706536  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:54.201103  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:54.205921  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:54.484177  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:54.701663  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:54.707079  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:55.200178  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:33:55.202487  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:55.202520  326776 retry.go:31] will retry after 21.963178346s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:55.206889  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:55.700374  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:55.706491  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:56.201092  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:56.205520  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:56.701017  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:56.705817  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:57.200559  326776 kapi.go:107] duration metric: took 48.503627728s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1025 09:33:57.202487  326776 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-582494 cluster.
	I1025 09:33:57.203982  326776 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1025 09:33:57.205425  326776 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
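Editor's note: a minimal sketch of the opt-out hinted at above. Only the label key `gcp-auth-skip-secret` comes from the message; the pod name, label value, and image are illustrative (the image does appear elsewhere in this run):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds                # hypothetical name
      labels:
        gcp-auth-skip-secret: "true"    # key from the hint above; the value is arbitrary
    spec:
      containers:
      - name: app
        image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
        command: ["sleep", "3600"]
    EOF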
	I1025 09:33:57.206388  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:57.706972  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:58.207442  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:58.707540  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:59.207379  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:59.706573  326776 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:00.207793  326776 kapi.go:107] duration metric: took 57.504955185s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
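Editor's note: kapi.go polls the API server for pods matching each addon's label selector until they leave Pending. A one-off shell equivalent of the csi-hostpath-driver wait that just completed, assuming standard kubectl wait semantics (the timeout is illustrative, and this is not minikube's actual polling code):

    kubectl -n kube-system wait pod \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver \
      --for=condition=Ready --timeout=6m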
	I1025 09:34:17.168030  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1025 09:34:17.732021  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:34:17.732050  326776 retry.go:31] will retry after 29.095006215s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
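Editor's note: every retry hits the identical validation error, so the backoff only postpones the final failure; the delays grow from ~18.7s to ~22.0s to ~29.1s, consistent with backoff plus jitter. A rough shell analogue of that pattern — a sketch, not minikube's actual retry.go:

    delay=18
    for attempt in 1 2 3 4; do
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.34.1/kubectl apply --force \
        -f /etc/kubernetes/addons/ig-crd.yaml \
        -f /etc/kubernetes/addons/ig-deployment.yaml && break
      sleep "$delay"
      delay=$((delay * 4 / 3))   # grow ~33% per attempt: 18s -> 24s -> 32s, near the observed spacing
    done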
	I1025 09:34:46.828844  326776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1025 09:34:47.397373  326776 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:34:47.397522  326776 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1025 09:34:47.402000  326776 out.go:179] * Enabled addons: ingress-dns, amd-gpu-device-plugin, registry-creds, cloud-spanner, nvidia-device-plugin, metrics-server, yakd, default-storageclass, storage-provisioner, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1025 09:34:47.403393  326776 addons.go:514] duration metric: took 1m47.08141246s for enable addons: enabled=[ingress-dns amd-gpu-device-plugin registry-creds cloud-spanner nvidia-device-plugin metrics-server yakd default-storageclass storage-provisioner storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1025 09:34:47.403458  326776 start.go:246] waiting for cluster config update ...
	I1025 09:34:47.403481  326776 start.go:255] writing updated cluster config ...
	I1025 09:34:47.403801  326776 ssh_runner.go:195] Run: rm -f paused
	I1025 09:34:47.408274  326776 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:34:47.412416  326776 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x52sm" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:47.417036  326776 pod_ready.go:94] pod "coredns-66bc5c9577-x52sm" is "Ready"
	I1025 09:34:47.417067  326776 pod_ready.go:86] duration metric: took 4.625059ms for pod "coredns-66bc5c9577-x52sm" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:47.419174  326776 pod_ready.go:83] waiting for pod "etcd-addons-582494" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:47.423232  326776 pod_ready.go:94] pod "etcd-addons-582494" is "Ready"
	I1025 09:34:47.423254  326776 pod_ready.go:86] duration metric: took 4.057225ms for pod "etcd-addons-582494" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:47.425289  326776 pod_ready.go:83] waiting for pod "kube-apiserver-addons-582494" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:47.429061  326776 pod_ready.go:94] pod "kube-apiserver-addons-582494" is "Ready"
	I1025 09:34:47.429083  326776 pod_ready.go:86] duration metric: took 3.772431ms for pod "kube-apiserver-addons-582494" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:47.430941  326776 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-582494" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:47.813640  326776 pod_ready.go:94] pod "kube-controller-manager-addons-582494" is "Ready"
	I1025 09:34:47.813671  326776 pod_ready.go:86] duration metric: took 382.708184ms for pod "kube-controller-manager-addons-582494" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:48.013188  326776 pod_ready.go:83] waiting for pod "kube-proxy-fmsgh" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:48.413228  326776 pod_ready.go:94] pod "kube-proxy-fmsgh" is "Ready"
	I1025 09:34:48.413257  326776 pod_ready.go:86] duration metric: took 400.043463ms for pod "kube-proxy-fmsgh" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:48.612735  326776 pod_ready.go:83] waiting for pod "kube-scheduler-addons-582494" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:49.012835  326776 pod_ready.go:94] pod "kube-scheduler-addons-582494" is "Ready"
	I1025 09:34:49.012862  326776 pod_ready.go:86] duration metric: took 400.092842ms for pod "kube-scheduler-addons-582494" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:49.012873  326776 pod_ready.go:40] duration metric: took 1.604563144s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:34:49.061617  326776 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:34:49.063460  326776 out.go:179] * Done! kubectl is now configured to use "addons-582494" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 25 09:34:50 addons-582494 crio[771]: time="2025-10-25T09:34:50.008043822Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 25 09:34:52 addons-582494 crio[771]: time="2025-10-25T09:34:52.018424624Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=f898901f-45fb-404f-a5fc-02b6686fdf5b name=/runtime.v1.ImageService/PullImage
	Oct 25 09:34:52 addons-582494 crio[771]: time="2025-10-25T09:34:52.019068683Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cf2b4519-182a-456f-b4bd-dc1660500698 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:34:52 addons-582494 crio[771]: time="2025-10-25T09:34:52.020641659Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=31c4933c-b653-41d3-8933-32663867e62c name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:34:52 addons-582494 crio[771]: time="2025-10-25T09:34:52.025910966Z" level=info msg="Creating container: default/busybox/busybox" id=45e93bac-184e-4008-8cc1-77a0700a886d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:34:52 addons-582494 crio[771]: time="2025-10-25T09:34:52.026054936Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:34:52 addons-582494 crio[771]: time="2025-10-25T09:34:52.031773063Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:34:52 addons-582494 crio[771]: time="2025-10-25T09:34:52.032451142Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:34:52 addons-582494 crio[771]: time="2025-10-25T09:34:52.061305609Z" level=info msg="Created container 1ee1af5d7cbcc0565d3100b2cc3f584b4302420dea447c7a4c9e0fdb3ed39c01: default/busybox/busybox" id=45e93bac-184e-4008-8cc1-77a0700a886d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:34:52 addons-582494 crio[771]: time="2025-10-25T09:34:52.062012718Z" level=info msg="Starting container: 1ee1af5d7cbcc0565d3100b2cc3f584b4302420dea447c7a4c9e0fdb3ed39c01" id=630abb8c-5c8e-463f-ba8c-80334b945f24 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:34:52 addons-582494 crio[771]: time="2025-10-25T09:34:52.064212519Z" level=info msg="Started container" PID=6672 containerID=1ee1af5d7cbcc0565d3100b2cc3f584b4302420dea447c7a4c9e0fdb3ed39c01 description=default/busybox/busybox id=630abb8c-5c8e-463f-ba8c-80334b945f24 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cac9993f4feac857db528abd2beaac9143fcb918ab677c83d687b4e3268bd8f9
	Oct 25 09:34:54 addons-582494 crio[771]: time="2025-10-25T09:34:54.654907852Z" level=info msg="Removing container: c44859e4f73a8feb1eb1050a21fceebbed182f4ac1739f83777d6353aa766b93" id=ca20b2d2-142b-4373-b3b6-b0d01664373d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:34:54 addons-582494 crio[771]: time="2025-10-25T09:34:54.664807784Z" level=info msg="Removed container c44859e4f73a8feb1eb1050a21fceebbed182f4ac1739f83777d6353aa766b93: gcp-auth/gcp-auth-certs-patch-2njbb/patch" id=ca20b2d2-142b-4373-b3b6-b0d01664373d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:34:54 addons-582494 crio[771]: time="2025-10-25T09:34:54.666503095Z" level=info msg="Removing container: 80a7ac0f7bca51256a4326d3f1c1412cf6289782be41f5aaa5ff11bee32ad640" id=8c2d29f3-500b-4e1b-8e89-08b5cef08753 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:34:54 addons-582494 crio[771]: time="2025-10-25T09:34:54.673487375Z" level=info msg="Removed container 80a7ac0f7bca51256a4326d3f1c1412cf6289782be41f5aaa5ff11bee32ad640: gcp-auth/gcp-auth-certs-create-gvv74/create" id=8c2d29f3-500b-4e1b-8e89-08b5cef08753 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:34:54 addons-582494 crio[771]: time="2025-10-25T09:34:54.67630399Z" level=info msg="Stopping pod sandbox: c1c1c6d3cc65d16b7f2ddf2cd27d691e979219bfc47ba246511cc6a63423da0f" id=55ea3e5c-d654-44c7-9787-1322af98490d name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:34:54 addons-582494 crio[771]: time="2025-10-25T09:34:54.676404345Z" level=info msg="Stopped pod sandbox (already stopped): c1c1c6d3cc65d16b7f2ddf2cd27d691e979219bfc47ba246511cc6a63423da0f" id=55ea3e5c-d654-44c7-9787-1322af98490d name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:34:54 addons-582494 crio[771]: time="2025-10-25T09:34:54.676925165Z" level=info msg="Removing pod sandbox: c1c1c6d3cc65d16b7f2ddf2cd27d691e979219bfc47ba246511cc6a63423da0f" id=5eff6b5e-e292-4e45-963e-ca945dff48e8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:34:54 addons-582494 crio[771]: time="2025-10-25T09:34:54.680101238Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:34:54 addons-582494 crio[771]: time="2025-10-25T09:34:54.680165439Z" level=info msg="Removed pod sandbox: c1c1c6d3cc65d16b7f2ddf2cd27d691e979219bfc47ba246511cc6a63423da0f" id=5eff6b5e-e292-4e45-963e-ca945dff48e8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:34:54 addons-582494 crio[771]: time="2025-10-25T09:34:54.680707092Z" level=info msg="Stopping pod sandbox: 9e41b0f100b00fb12aeda70b41b0ce87416a642ec1a70f666e93693e432fa4a5" id=253e074a-47cb-4a8f-94a3-14986fda1590 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:34:54 addons-582494 crio[771]: time="2025-10-25T09:34:54.680750504Z" level=info msg="Stopped pod sandbox (already stopped): 9e41b0f100b00fb12aeda70b41b0ce87416a642ec1a70f666e93693e432fa4a5" id=253e074a-47cb-4a8f-94a3-14986fda1590 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:34:54 addons-582494 crio[771]: time="2025-10-25T09:34:54.681143249Z" level=info msg="Removing pod sandbox: 9e41b0f100b00fb12aeda70b41b0ce87416a642ec1a70f666e93693e432fa4a5" id=5e729482-e78c-4ca1-a908-0d35cc43f37f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:34:54 addons-582494 crio[771]: time="2025-10-25T09:34:54.68406296Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:34:54 addons-582494 crio[771]: time="2025-10-25T09:34:54.684118253Z" level=info msg="Removed pod sandbox: 9e41b0f100b00fb12aeda70b41b0ce87416a642ec1a70f666e93693e432fa4a5" id=5e729482-e78c-4ca1-a908-0d35cc43f37f name=/runtime.v1.RuntimeService/RemovePodSandbox
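Editor's note: the lines above trace the normal CRI flow for the busybox pod — PullImage, ImageStatus, CreateContainer, StartContainer — followed by cleanup of the finished gcp-auth job containers and sandboxes. The same state can be inspected on the node with crictl (commands illustrative):

    sudo crictl images | grep busybox     # image pulled at 09:34:52
    sudo crictl ps --name busybox         # container 1ee1af5d7cbcc... running
    sudo crictl pods --state NotReady     # stopped sandboxes awaiting removal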
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	1ee1af5d7cbcc       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          7 seconds ago        Running             busybox                                  0                   cac9993f4feac       busybox                                     default
	a590641d19544       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          About a minute ago   Running             csi-snapshotter                          0                   913c85549ca1b       csi-hostpathplugin-s5v6k                    kube-system
	14680175d4318       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          About a minute ago   Running             csi-provisioner                          0                   913c85549ca1b       csi-hostpathplugin-s5v6k                    kube-system
	8b3ea24513b9d       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            About a minute ago   Running             liveness-probe                           0                   913c85549ca1b       csi-hostpathplugin-s5v6k                    kube-system
	1c33d20dccf9d       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           About a minute ago   Running             hostpath                                 0                   913c85549ca1b       csi-hostpathplugin-s5v6k                    kube-system
	5f498c8f7524b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 About a minute ago   Running             gcp-auth                                 0                   b54023d98bc99       gcp-auth-78565c9fb4-fbgsp                   gcp-auth
	19e3e274001e7       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                About a minute ago   Running             node-driver-registrar                    0                   913c85549ca1b       csi-hostpathplugin-s5v6k                    kube-system
	30c87a2348b53       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             About a minute ago   Running             controller                               0                   910abc08088b2       ingress-nginx-controller-675c5ddd98-99ltz   ingress-nginx
	c77d73d1bd9c3       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            About a minute ago   Running             gadget                                   0                   49fcaf6cd8f16       gadget-mhs6l                                gadget
	fd4a5a7d8c5f4       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              About a minute ago   Running             registry-proxy                           0                   45a541774711d       registry-proxy-vjtwb                        kube-system
	aaacd09fa43cb       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     About a minute ago   Running             amd-gpu-device-plugin                    0                   7ebd6ddef915d       amd-gpu-device-plugin-j28pq                 kube-system
	5f1abc3fa71fd       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   3c84383feb961       nvidia-device-plugin-daemonset-wln7g        kube-system
	ba8a2ae228e5a       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   e8787f64bb07c       snapshot-controller-7d9fbc56b8-kww9w        kube-system
	b2e5cedb9fdb4       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   About a minute ago   Running             csi-external-health-monitor-controller   0                   913c85549ca1b       csi-hostpathplugin-s5v6k                    kube-system
	53959ea9bc3e2       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago   Running             csi-attacher                             0                   39f75f3f82bb0       csi-hostpath-attacher-0                     kube-system
	214643b0e233a       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              About a minute ago   Running             csi-resizer                              0                   98773678657d3       csi-hostpath-resizer-0                      kube-system
	1b757b91d048a       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                                             About a minute ago   Exited              patch                                    2                   d1aa5e992c5bc       ingress-nginx-admission-patch-l8h7x         ingress-nginx
	1bf763269e9c6       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           About a minute ago   Running             registry                                 0                   f22742830a23f       registry-6b586f9694-jftz9                   kube-system
	e59c39fff2eab       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               About a minute ago   Running             minikube-ingress-dns                     0                   fb4ec008b0c49       kube-ingress-dns-minikube                   kube-system
	da4dbc32e1215       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              About a minute ago   Running             yakd                                     0                   0a668596eb605       yakd-dashboard-5ff678cb9-bjt42              yakd-dashboard
	d884ae3f8ba28       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               About a minute ago   Running             cloud-spanner-emulator                   0                   4eab3c3d495c0       cloud-spanner-emulator-86bd5cbb97-7f4kh     default
	1712ecb1d7d91       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   About a minute ago   Exited              create                                   0                   9b6ee7a455638       ingress-nginx-admission-create-jk78g        ingress-nginx
	42f1c21ebcd71       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   c4d9203de5ce2       snapshot-controller-7d9fbc56b8-b7qwq        kube-system
	08e112895de76       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             About a minute ago   Running             local-path-provisioner                   0                   bc8fbd25e7ea4       local-path-provisioner-648f6765c9-sdhd9     local-path-storage
	eb8b6e448a834       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   ff9d7c2e6014b       metrics-server-85b7d694d7-wnq6w             kube-system
	d0697f1703581       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   b4e31d2cb7d28       coredns-66bc5c9577-x52sm                    kube-system
	b927c0ae13deb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   fa266bb95ca83       storage-provisioner                         kube-system
	29d8b7fdf8c84       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   d47b121ec8a23       kube-proxy-fmsgh                            kube-system
	d0a9822bc2dd8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   f90cf1faeec9f       kindnet-dkqbp                               kube-system
	19a44bd56404c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             2 minutes ago        Running             etcd                                     0                   336012917a173       etcd-addons-582494                          kube-system
	9b9bb34ede66b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             2 minutes ago        Running             kube-apiserver                           0                   22c6717fe0966       kube-apiserver-addons-582494                kube-system
	d0ccb48b50e7a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             2 minutes ago        Running             kube-controller-manager                  0                   a3e1459f47202       kube-controller-manager-addons-582494       kube-system
	62e249a5b3adf       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             2 minutes ago        Running             kube-scheduler                           0                   ccb17b5db09e5       kube-scheduler-addons-582494                kube-system
	
	
	==> coredns [d0697f1703581cace0f227a3658ea08db9401a3bc389367933d7353764a27242] <==
	[INFO] 10.244.0.15:33575 - 34250 "A IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.002779111s
	[INFO] 10.244.0.15:60214 - 30291 "A IN registry.kube-system.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000062692s
	[INFO] 10.244.0.15:60214 - 30689 "AAAA IN registry.kube-system.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000106309s
	[INFO] 10.244.0.15:39535 - 43451 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000071651s
	[INFO] 10.244.0.15:39535 - 43221 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000099424s
	[INFO] 10.244.0.15:44468 - 46062 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000067038s
	[INFO] 10.244.0.15:44468 - 46325 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000120205s
	[INFO] 10.244.0.15:59331 - 43331 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000124729s
	[INFO] 10.244.0.15:59331 - 43598 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000188193s
	[INFO] 10.244.0.22:54495 - 34858 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0002397s
	[INFO] 10.244.0.22:43900 - 59417 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000348064s
	[INFO] 10.244.0.22:39600 - 55901 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000172424s
	[INFO] 10.244.0.22:58212 - 44524 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000203465s
	[INFO] 10.244.0.22:58060 - 18147 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0000991s
	[INFO] 10.244.0.22:46480 - 20720 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126809s
	[INFO] 10.244.0.22:34378 - 59396 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.004172056s
	[INFO] 10.244.0.22:53604 - 19045 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.004211652s
	[INFO] 10.244.0.22:39892 - 755 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.005702351s
	[INFO] 10.244.0.22:60817 - 46199 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.00582152s
	[INFO] 10.244.0.22:42767 - 10907 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004663784s
	[INFO] 10.244.0.22:44258 - 39572 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005061424s
	[INFO] 10.244.0.22:33819 - 51161 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004034718s
	[INFO] 10.244.0.22:38876 - 33170 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004378859s
	[INFO] 10.244.0.22:36523 - 52445 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000930965s
	[INFO] 10.244.0.22:40205 - 21697 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001233336s
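Editor's note: the NXDOMAIN bursts above are expected, not errors — with the default pod resolver (ndots:5), a short name is tried against each cluster search suffix before the bare name finally resolves NOERROR. Appending a trailing dot makes the name fully qualified and skips the search-path walk:

    nslookup storage.googleapis.com.    # trailing dot = FQDN, no search expansion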
	
	
	==> describe nodes <==
	Name:               addons-582494
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-582494
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=addons-582494
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_32_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-582494
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-582494"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:32:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-582494
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:34:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:34:57 +0000   Sat, 25 Oct 2025 09:32:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:34:57 +0000   Sat, 25 Oct 2025 09:32:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:34:57 +0000   Sat, 25 Oct 2025 09:32:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:34:57 +0000   Sat, 25 Oct 2025 09:33:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-582494
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                210d01d7-a029-4efc-9521-d1eac2e4328a
	  Boot ID:                    41de8bfa-0cc1-441f-80d4-c56fb8371229
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     cloud-spanner-emulator-86bd5cbb97-7f4kh      0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  gadget                      gadget-mhs6l                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  gcp-auth                    gcp-auth-78565c9fb4-fbgsp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-99ltz    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         117s
	  kube-system                 amd-gpu-device-plugin-j28pq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 coredns-66bc5c9577-x52sm                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     119s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 csi-hostpathplugin-s5v6k                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 etcd-addons-582494                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m5s
	  kube-system                 kindnet-dkqbp                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      119s
	  kube-system                 kube-apiserver-addons-582494                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-addons-582494        200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-fmsgh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-scheduler-addons-582494                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 metrics-server-85b7d694d7-wnq6w              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         118s
	  kube-system                 nvidia-device-plugin-daemonset-wln7g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 registry-6b586f9694-jftz9                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 registry-creds-764b6fb674-n9dsg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 registry-proxy-vjtwb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 snapshot-controller-7d9fbc56b8-b7qwq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 snapshot-controller-7d9fbc56b8-kww9w         0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  local-path-storage          local-path-provisioner-648f6765c9-sdhd9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-bjt42               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     118s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 118s  kube-proxy       
	  Normal  Starting                 2m5s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m5s  kubelet          Node addons-582494 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s  kubelet          Node addons-582494 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s  kubelet          Node addons-582494 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m    node-controller  Node addons-582494 event: Registered Node addons-582494 in Controller
	  Normal  NodeReady                108s  kubelet          Node addons-582494 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 63 e3 50 6d 87 08 06
	[Oct25 08:56] IPv4: martian source 10.244.0.1 from 10.244.0.33, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a c9 b9 b8 22 26 08 06
	[ +22.227775] IPv4: martian source 10.244.0.1 from 10.244.0.34, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a b1 be 9e e6 e6 08 06
	[Oct25 08:57] IPv4: martian source 10.244.0.1 from 10.244.0.35, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5e 49 04 8a e9 97 08 06
	[ +24.733016] IPv4: martian source 10.244.0.1 from 10.244.0.37, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 18 9e 87 3b 98 08 06
	[Oct25 08:59] IPv4: martian source 10.244.0.1 from 10.244.0.45, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 b1 40 d1 c4 53 08 06
	[  +0.001667] IPv4: martian source 10.244.0.1 from 10.244.0.46, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 01 10 9f 8a b0 08 06
	[Oct25 09:01] IPv4: martian source 10.244.0.1 from 10.244.0.47, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 3a a9 0e 90 6b 08 06
	[ +24.059892] IPv4: martian source 10.244.0.1 from 10.244.0.48, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a cb 97 2f a2 36 08 06
	[Oct25 09:02] IPv4: martian source 10.244.0.1 from 10.244.0.49, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a2 3c 1b 63 a6 4c 08 06
	[ +21.349543] IPv4: martian source 10.244.0.1 from 10.244.0.51, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 b5 ea 39 22 e5 08 06
	[Oct25 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.52, on dev eth0
	[  +0.000020] ll header: 00000000: ff ff ff ff ff ff 16 b3 d7 05 74 b5 08 06
	[ +20.912051] IPv4: martian source 10.244.0.1 from 10.244.0.53, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6e b0 a7 e4 38 e4 08 06
	
	
	==> etcd [19a44bd56404c19362fe3617a93fa60a8364e0f2019540c002e629ef5e155d5f] <==
	{"level":"warn","ts":"2025-10-25T09:32:51.673065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.679681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.687079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.701801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.710610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.718533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.726077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.733236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.740165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.747095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.753840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.760771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.773958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.780633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.787198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.793571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.812375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.819706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.826626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:32:51.876608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:03.189782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:03.196604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:16.965600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:16.972311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:16.988139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51932","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [5f498c8f7524b00b664dd08f3a1f0f60a2b8ef24a467414ad638ba00176c1305] <==
	2025/10/25 09:33:56 GCP Auth Webhook started!
	2025/10/25 09:34:49 Ready to marshal response ...
	2025/10/25 09:34:49 Ready to write response ...
	2025/10/25 09:34:49 Ready to marshal response ...
	2025/10/25 09:34:49 Ready to write response ...
	2025/10/25 09:34:49 Ready to marshal response ...
	2025/10/25 09:34:49 Ready to write response ...
	
	
	==> kernel <==
	 09:34:59 up  1:17,  0 user,  load average: 1.00, 28.83, 59.66
	Linux addons-582494 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d0a9822bc2dd891d4ea20e8a611c10464e64ace303744b409fe97806e4953c3b] <==
	I1025 09:33:03.030026       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:33:03.030067       1 metrics.go:72] Registering metrics
	I1025 09:33:03.030161       1 controller.go:711] "Syncing nftables rules"
	I1025 09:33:11.313955       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:33:11.314064       1 main.go:301] handling current node
	I1025 09:33:21.314733       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:33:21.314861       1 main.go:301] handling current node
	I1025 09:33:31.313898       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:33:31.313944       1 main.go:301] handling current node
	I1025 09:33:41.314211       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:33:41.314258       1 main.go:301] handling current node
	I1025 09:33:51.313940       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:33:51.313974       1 main.go:301] handling current node
	I1025 09:34:01.314216       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:34:01.314267       1 main.go:301] handling current node
	I1025 09:34:11.315703       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:34:11.315742       1 main.go:301] handling current node
	I1025 09:34:21.314944       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:34:21.314981       1 main.go:301] handling current node
	I1025 09:34:31.316598       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:34:31.316635       1 main.go:301] handling current node
	I1025 09:34:41.316923       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:34:41.316963       1 main.go:301] handling current node
	I1025 09:34:51.314689       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:34:51.314722       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9b9bb34ede66b74552aebc3ff36135f18e45f2f481956f9f42f40048213a058d] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1025 09:33:15.284542       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.189.108:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.189.108:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.189.108:443: connect: connection refused" logger="UnhandledError"
	W1025 09:33:16.285893       1 handler_proxy.go:99] no RequestInfo found in the context
	W1025 09:33:16.285948       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 09:33:16.285989       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1025 09:33:16.285991       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1025 09:33:16.286001       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1025 09:33:16.287148       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1025 09:33:16.965483       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1025 09:33:16.972293       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1025 09:33:16.988050       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1025 09:33:16.994838       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1025 09:33:20.295508       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 09:33:20.295660       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1025 09:33:20.295847       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.189.108:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.189.108:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	I1025 09:33:20.304632       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1025 09:34:57.825771       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43066: use of closed network connection
	E1025 09:34:57.980276       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43084: use of closed network connection
	
	
	==> kube-controller-manager [d0ccb48b50e7aa35ae169d4df1678a14100d85aad31e6a93422011b7ddccd176] <==
	I1025 09:32:59.275580       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-582494"
	I1025 09:32:59.275652       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 09:32:59.275683       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 09:32:59.275912       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 09:32:59.276698       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 09:32:59.276735       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:32:59.276745       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 09:32:59.276770       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:32:59.276807       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:32:59.276811       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:32:59.276884       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:32:59.276977       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 09:32:59.276992       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:32:59.277138       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 09:32:59.277657       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:32:59.278912       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:32:59.284946       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:32:59.299259       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1025 09:33:01.745640       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1025 09:33:14.277949       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1025 09:33:29.290742       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1025 09:33:29.290799       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1025 09:33:29.308948       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1025 09:33:29.391055       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:33:29.409458       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [29d8b7fdf8c84878165f02cb5f082040681528c27127f1406864d3c2332146e8] <==
	I1025 09:33:00.741126       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:33:01.007897       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:33:01.112335       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:33:01.121746       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 09:33:01.121889       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:33:01.423226       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:33:01.423308       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:33:01.500491       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:33:01.514915       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:33:01.514969       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:33:01.600015       1 config.go:200] "Starting service config controller"
	I1025 09:33:01.600045       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:33:01.600075       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:33:01.600081       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:33:01.600138       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:33:01.600147       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:33:01.601048       1 config.go:309] "Starting node config controller"
	I1025 09:33:01.601058       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:33:01.601066       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:33:01.702578       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:33:01.704811       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:33:01.704838       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [62e249a5b3adf3fce1b9e30d1842ff783f291aff8d9e9b2c1082ea8806fadacf] <==
	E1025 09:32:52.294115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:32:52.294125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:32:52.295690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:32:52.295950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:32:52.296068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:32:52.296145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:32:52.296216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:32:52.296250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:32:52.296390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:32:52.296450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:32:52.296491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:32:52.296489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 09:32:53.157364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:32:53.161529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:32:53.194870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:32:53.274499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:32:53.298034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:32:53.314123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:32:53.501431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:32:53.503481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:32:53.511765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:32:53.563977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:32:53.583229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:32:53.721138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1025 09:32:55.783010       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:33:43 addons-582494 kubelet[1295]: E1025 09:33:43.764109    1295 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 25 09:33:43 addons-582494 kubelet[1295]: E1025 09:33:43.764244    1295 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe140945-faea-411c-88be-84e6d8ba91bb-gcr-creds podName:fe140945-faea-411c-88be-84e6d8ba91bb nodeName:}" failed. No retries permitted until 2025-10-25 09:34:15.764217992 +0000 UTC m=+81.200869922 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/fe140945-faea-411c-88be-84e6d8ba91bb-gcr-creds") pod "registry-creds-764b6fb674-n9dsg" (UID: "fe140945-faea-411c-88be-84e6d8ba91bb") : secret "registry-creds-gcr" not found
	Oct 25 09:33:43 addons-582494 kubelet[1295]: I1025 09:33:43.928700    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-j28pq" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:33:43 addons-582494 kubelet[1295]: I1025 09:33:43.928927    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-wln7g" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:33:43 addons-582494 kubelet[1295]: I1025 09:33:43.939752    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/amd-gpu-device-plugin-j28pq" podStartSLOduration=1.767078082 podStartE2EDuration="32.939724141s" podCreationTimestamp="2025-10-25 09:33:11 +0000 UTC" firstStartedPulling="2025-10-25 09:33:12.323982954 +0000 UTC m=+17.760634866" lastFinishedPulling="2025-10-25 09:33:43.496629011 +0000 UTC m=+48.933280925" observedRunningTime="2025-10-25 09:33:43.939271326 +0000 UTC m=+49.375923291" watchObservedRunningTime="2025-10-25 09:33:43.939724141 +0000 UTC m=+49.376376073"
	Oct 25 09:33:44 addons-582494 kubelet[1295]: I1025 09:33:44.933641    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-j28pq" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:33:46 addons-582494 kubelet[1295]: I1025 09:33:46.942889    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-vjtwb" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:33:47 addons-582494 kubelet[1295]: I1025 09:33:47.947191    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-vjtwb" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:33:49 addons-582494 kubelet[1295]: I1025 09:33:49.983678    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-mhs6l" podStartSLOduration=18.143696277 podStartE2EDuration="48.983652566s" podCreationTimestamp="2025-10-25 09:33:01 +0000 UTC" firstStartedPulling="2025-10-25 09:33:18.182070581 +0000 UTC m=+23.618722497" lastFinishedPulling="2025-10-25 09:33:49.022026869 +0000 UTC m=+54.458678786" observedRunningTime="2025-10-25 09:33:49.982661452 +0000 UTC m=+55.419313430" watchObservedRunningTime="2025-10-25 09:33:49.983652566 +0000 UTC m=+55.420304498"
	Oct 25 09:33:49 addons-582494 kubelet[1295]: I1025 09:33:49.984744    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-vjtwb" podStartSLOduration=5.392592196 podStartE2EDuration="38.984724406s" podCreationTimestamp="2025-10-25 09:33:11 +0000 UTC" firstStartedPulling="2025-10-25 09:33:12.340486491 +0000 UTC m=+17.777138420" lastFinishedPulling="2025-10-25 09:33:45.932618706 +0000 UTC m=+51.369270630" observedRunningTime="2025-10-25 09:33:46.966906138 +0000 UTC m=+52.403558074" watchObservedRunningTime="2025-10-25 09:33:49.984724406 +0000 UTC m=+55.421376337"
	Oct 25 09:33:52 addons-582494 kubelet[1295]: I1025 09:33:52.984081    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-99ltz" podStartSLOduration=18.449567638 podStartE2EDuration="50.984052964s" podCreationTimestamp="2025-10-25 09:33:02 +0000 UTC" firstStartedPulling="2025-10-25 09:33:20.246275468 +0000 UTC m=+25.682927394" lastFinishedPulling="2025-10-25 09:33:52.780760795 +0000 UTC m=+58.217412720" observedRunningTime="2025-10-25 09:33:52.983953906 +0000 UTC m=+58.420605841" watchObservedRunningTime="2025-10-25 09:33:52.984052964 +0000 UTC m=+58.420704907"
	Oct 25 09:33:57 addons-582494 kubelet[1295]: I1025 09:33:57.008241    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-fbgsp" podStartSLOduration=36.914734384 podStartE2EDuration="49.008216034s" podCreationTimestamp="2025-10-25 09:33:08 +0000 UTC" firstStartedPulling="2025-10-25 09:33:44.034619189 +0000 UTC m=+49.471271106" lastFinishedPulling="2025-10-25 09:33:56.12810084 +0000 UTC m=+61.564752756" observedRunningTime="2025-10-25 09:33:57.006894857 +0000 UTC m=+62.443546811" watchObservedRunningTime="2025-10-25 09:33:57.008216034 +0000 UTC m=+62.444867966"
	Oct 25 09:33:58 addons-582494 kubelet[1295]: I1025 09:33:58.714155    1295 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 25 09:33:58 addons-582494 kubelet[1295]: I1025 09:33:58.714195    1295 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 25 09:34:00 addons-582494 kubelet[1295]: I1025 09:34:00.038557    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-s5v6k" podStartSLOduration=1.9149300710000001 podStartE2EDuration="49.038531341s" podCreationTimestamp="2025-10-25 09:33:11 +0000 UTC" firstStartedPulling="2025-10-25 09:33:12.311705299 +0000 UTC m=+17.748357223" lastFinishedPulling="2025-10-25 09:33:59.435306567 +0000 UTC m=+64.871958493" observedRunningTime="2025-10-25 09:34:00.036676462 +0000 UTC m=+65.473328421" watchObservedRunningTime="2025-10-25 09:34:00.038531341 +0000 UTC m=+65.475183273"
	Oct 25 09:34:06 addons-582494 kubelet[1295]: I1025 09:34:06.659573    1295 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a047121e-bf91-4dae-b3cf-a53e42996b9a" path="/var/lib/kubelet/pods/a047121e-bf91-4dae-b3cf-a53e42996b9a/volumes"
	Oct 25 09:34:08 addons-582494 kubelet[1295]: I1025 09:34:08.659249    1295 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6081468f-bd5f-4f5d-aaec-34df7990b997" path="/var/lib/kubelet/pods/6081468f-bd5f-4f5d-aaec-34df7990b997/volumes"
	Oct 25 09:34:15 addons-582494 kubelet[1295]: E1025 09:34:15.833826    1295 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 25 09:34:15 addons-582494 kubelet[1295]: E1025 09:34:15.833966    1295 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe140945-faea-411c-88be-84e6d8ba91bb-gcr-creds podName:fe140945-faea-411c-88be-84e6d8ba91bb nodeName:}" failed. No retries permitted until 2025-10-25 09:35:19.833938632 +0000 UTC m=+145.270590558 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/fe140945-faea-411c-88be-84e6d8ba91bb-gcr-creds") pod "registry-creds-764b6fb674-n9dsg" (UID: "fe140945-faea-411c-88be-84e6d8ba91bb") : secret "registry-creds-gcr" not found
	Oct 25 09:34:48 addons-582494 kubelet[1295]: I1025 09:34:48.660307    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-wln7g" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:34:49 addons-582494 kubelet[1295]: I1025 09:34:49.810715    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7e2bff66-1ded-4b19-8d85-5456f9db38f3-gcp-creds\") pod \"busybox\" (UID: \"7e2bff66-1ded-4b19-8d85-5456f9db38f3\") " pod="default/busybox"
	Oct 25 09:34:49 addons-582494 kubelet[1295]: I1025 09:34:49.810770    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv6hh\" (UniqueName: \"kubernetes.io/projected/7e2bff66-1ded-4b19-8d85-5456f9db38f3-kube-api-access-sv6hh\") pod \"busybox\" (UID: \"7e2bff66-1ded-4b19-8d85-5456f9db38f3\") " pod="default/busybox"
	Oct 25 09:34:54 addons-582494 kubelet[1295]: I1025 09:34:54.651587    1295 scope.go:117] "RemoveContainer" containerID="c44859e4f73a8feb1eb1050a21fceebbed182f4ac1739f83777d6353aa766b93"
	Oct 25 09:34:54 addons-582494 kubelet[1295]: I1025 09:34:54.665122    1295 scope.go:117] "RemoveContainer" containerID="80a7ac0f7bca51256a4326d3f1c1412cf6289782be41f5aaa5ff11bee32ad640"
	Oct 25 09:34:57 addons-582494 kubelet[1295]: E1025 09:34:57.980203    1295 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:36680->127.0.0.1:40381: write tcp 127.0.0.1:36680->127.0.0.1:40381: write: broken pipe
	
	
	==> storage-provisioner [b927c0ae13deb3eeb19fde0c0340c1c698692cca9d7ba0a28cdeb1167e99cd0a] <==
	W1025 09:34:34.781279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:36.785365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:36.790984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:38.794900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:38.799226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:40.802248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:40.807247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:42.811541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:42.818365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:44.821903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:44.826625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:46.829827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:46.838543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:48.842465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:48.846404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:50.849587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:50.853899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:52.857863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:52.863541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:54.866763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:54.871363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:56.874841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:56.880364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:58.883878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:58.888596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
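Note on the storage-provisioner warnings at the tail of the dump: they repeat on a two-second cadence because the provisioner still reads v1 Endpoints (likely its leader-election loop, an inference from the cadence rather than anything this log states), and the apiserver flags that as deprecated in favor of discovery.k8s.io/v1 EndpointSlice from v1.33. The warnings are benign for this run; a hedged one-liner to confirm the replacement resource is served on this cluster (not part of the test):

	# List the EndpointSlice objects the deprecation warning points at.
	kubectl --context addons-582494 get endpointslices.discovery.k8s.io -A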
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-582494 -n addons-582494
helpers_test.go:269: (dbg) Run:  kubectl --context addons-582494 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-jk78g ingress-nginx-admission-patch-l8h7x registry-creds-764b6fb674-n9dsg
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-582494 describe pod ingress-nginx-admission-create-jk78g ingress-nginx-admission-patch-l8h7x registry-creds-764b6fb674-n9dsg
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-582494 describe pod ingress-nginx-admission-create-jk78g ingress-nginx-admission-patch-l8h7x registry-creds-764b6fb674-n9dsg: exit status 1 (64.819985ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jk78g" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-l8h7x" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-n9dsg" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-582494 describe pod ingress-nginx-admission-create-jk78g ingress-nginx-admission-patch-l8h7x registry-creds-764b6fb674-n9dsg: exit status 1
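The describe failure above is a race, not a separate bug: all three pods were deleted between the field-selector listing (helpers_test.go:269) and the describe call, and `kubectl describe` exits non-zero for every missing object. A sketch of a race-tolerant variant using `kubectl get`, whose `--ignore-not-found` flag suppresses the NotFound errors (pod names copied from this run; they would differ on a rerun):

	# 'get --ignore-not-found' exits 0 and prints nothing for pods that are gone,
	# unlike 'describe', which fails with NotFound for each missing pod.
	kubectl --context addons-582494 get pod \
	    ingress-nginx-admission-create-jk78g \
	    ingress-nginx-admission-patch-l8h7x \
	    registry-creds-764b6fb674-n9dsg \
	    -o wide --ignore-not-found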
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-582494 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-582494 addons disable headlamp --alsologtostderr -v=1: exit status 11 (263.895175ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:35:00.795956  336252 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:35:00.796729  336252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:00.796746  336252 out.go:374] Setting ErrFile to fd 2...
	I1025 09:35:00.796751  336252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:00.797014  336252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 09:35:00.797378  336252 mustload.go:65] Loading cluster: addons-582494
	I1025 09:35:00.797798  336252 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:00.797819  336252 addons.go:606] checking whether the cluster is paused
	I1025 09:35:00.797927  336252 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:00.797946  336252 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:35:00.798365  336252 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:35:00.819414  336252 ssh_runner.go:195] Run: systemctl --version
	I1025 09:35:00.819473  336252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:35:00.838506  336252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:35:00.940727  336252 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:35:00.940858  336252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:35:00.973289  336252 cri.go:89] found id: "a590641d195442d7c8f9417c224a3dfd0909fa17ff5dffbbee56d77203a7bc30"
	I1025 09:35:00.973336  336252 cri.go:89] found id: "14680175d4318c6439bfb260920f00d4fb15de1e1ed56f7cd5a7fdc5088d817c"
	I1025 09:35:00.973343  336252 cri.go:89] found id: "8b3ea24513b9dbeed1495a8ece257396262d09ae53d85f508fd9e1aa15fae881"
	I1025 09:35:00.973349  336252 cri.go:89] found id: "1c33d20dccf9d28551e1afe73e2aa2a5233a190fe5036da5597ab8f98d35e7e1"
	I1025 09:35:00.973353  336252 cri.go:89] found id: "19e3e274001e72f84f8eb6cbd581c789c82111bc575de760d12a318646815997"
	I1025 09:35:00.973360  336252 cri.go:89] found id: "fd4a5a7d8c5f4281000825cc9877d3ea27a21a958879a5db98ee78c72c35f3f4"
	I1025 09:35:00.973363  336252 cri.go:89] found id: "aaacd09fa43cb6730a3a85ccb82d8f4f88d649d37aed22b5d9478f826dd71446"
	I1025 09:35:00.973366  336252 cri.go:89] found id: "5f1abc3fa71fd76f7122379a39679051b1b37e07736695f416558bb08013c9a0"
	I1025 09:35:00.973369  336252 cri.go:89] found id: "ba8a2ae228e5ae5757cffd5f4e4c1b0f6a57d3b7dbac09500e7eb8bad2ffeda6"
	I1025 09:35:00.973384  336252 cri.go:89] found id: "b2e5cedb9fdb4dc8cf750ad182b9d0b075fe38dfe8202975ba1bc91144918969"
	I1025 09:35:00.973392  336252 cri.go:89] found id: "53959ea9bc3e27a71fdfa582a79586fd4fbba5704ce52884b6f578c2371cf734"
	I1025 09:35:00.973397  336252 cri.go:89] found id: "214643b0e233a8f7275185c6308eadd6b3d0e92ec613c31139061014c04338cd"
	I1025 09:35:00.973404  336252 cri.go:89] found id: "1bf763269e9c6cc17fb6ef6bcce3ea5f64cabe52e37b91c75c32967fd2e733f1"
	I1025 09:35:00.973409  336252 cri.go:89] found id: "e59c39fff2eab9a7167d0388e3624c34d57aee469cc349ffd0faa057312a177f"
	I1025 09:35:00.973416  336252 cri.go:89] found id: "42f1c21ebcd7182710da30d4c9fa79ad171f45c43481b3a15df698872a884c69"
	I1025 09:35:00.973426  336252 cri.go:89] found id: "eb8b6e448a83470b682c8b0a60f02504d2943bfc97e4fb2b6411d4a79b1140d5"
	I1025 09:35:00.973433  336252 cri.go:89] found id: "d0697f1703581cace0f227a3658ea08db9401a3bc389367933d7353764a27242"
	I1025 09:35:00.973439  336252 cri.go:89] found id: "b927c0ae13deb3eeb19fde0c0340c1c698692cca9d7ba0a28cdeb1167e99cd0a"
	I1025 09:35:00.973443  336252 cri.go:89] found id: "29d8b7fdf8c84878165f02cb5f082040681528c27127f1406864d3c2332146e8"
	I1025 09:35:00.973446  336252 cri.go:89] found id: "d0a9822bc2dd891d4ea20e8a611c10464e64ace303744b409fe97806e4953c3b"
	I1025 09:35:00.973451  336252 cri.go:89] found id: "19a44bd56404c19362fe3617a93fa60a8364e0f2019540c002e629ef5e155d5f"
	I1025 09:35:00.973453  336252 cri.go:89] found id: "9b9bb34ede66b74552aebc3ff36135f18e45f2f481956f9f42f40048213a058d"
	I1025 09:35:00.973456  336252 cri.go:89] found id: "d0ccb48b50e7aa35ae169d4df1678a14100d85aad31e6a93422011b7ddccd176"
	I1025 09:35:00.973459  336252 cri.go:89] found id: "62e249a5b3adf3fce1b9e30d1842ff783f291aff8d9e9b2c1082ea8806fadacf"
	I1025 09:35:00.973461  336252 cri.go:89] found id: ""
	I1025 09:35:00.973519  336252 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:35:00.989011  336252 out.go:203] 
	W1025 09:35:00.990239  336252 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:35:00.990260  336252 out.go:285] * 
	* 
	W1025 09:35:00.993427  336252 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:35:00.995061  336252 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-582494 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.75s)
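
Note: every addons-disable failure in this report shares the signature above. Before disabling an addon, minikube checks whether the cluster is paused (addons.go:606), and that check shells out to "sudo runc list -f json" on the node; on this crio node /run/runc does not exist, so the check exits 1 and the command aborts with MK_ADDON_DISABLE_PAUSED, even though the crictl listing just before it succeeds. A minimal sketch to reproduce both probes by hand (the profile name is taken from this run; treating crictl as the more appropriate pause probe on crio is an assumption, not current minikube behavior):

	# the probe that fails in the logs above
	minikube ssh -p addons-582494 -- sudo runc list -f json
	# the crio-native listing that succeeds in the same logs
	minikube ssh -p addons-582494 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system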

TestAddons/parallel/CloudSpanner (5.29s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-7f4kh" [70c70fae-1d55-44d9-9072-277b733886c9] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003297528s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-582494 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-582494 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (277.399578ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:35:14.959968  338241 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:35:14.962110  338241 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:14.962140  338241 out.go:374] Setting ErrFile to fd 2...
	I1025 09:35:14.962150  338241 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:14.962624  338241 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 09:35:14.963146  338241 mustload.go:65] Loading cluster: addons-582494
	I1025 09:35:14.963866  338241 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:14.963895  338241 addons.go:606] checking whether the cluster is paused
	I1025 09:35:14.964067  338241 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:14.964091  338241 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:35:14.964985  338241 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:35:14.987539  338241 ssh_runner.go:195] Run: systemctl --version
	I1025 09:35:14.987605  338241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:35:15.009671  338241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:35:15.113957  338241 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:35:15.114057  338241 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:35:15.145077  338241 cri.go:89] found id: "a590641d195442d7c8f9417c224a3dfd0909fa17ff5dffbbee56d77203a7bc30"
	I1025 09:35:15.145110  338241 cri.go:89] found id: "14680175d4318c6439bfb260920f00d4fb15de1e1ed56f7cd5a7fdc5088d817c"
	I1025 09:35:15.145114  338241 cri.go:89] found id: "8b3ea24513b9dbeed1495a8ece257396262d09ae53d85f508fd9e1aa15fae881"
	I1025 09:35:15.145119  338241 cri.go:89] found id: "1c33d20dccf9d28551e1afe73e2aa2a5233a190fe5036da5597ab8f98d35e7e1"
	I1025 09:35:15.145121  338241 cri.go:89] found id: "19e3e274001e72f84f8eb6cbd581c789c82111bc575de760d12a318646815997"
	I1025 09:35:15.145125  338241 cri.go:89] found id: "fd4a5a7d8c5f4281000825cc9877d3ea27a21a958879a5db98ee78c72c35f3f4"
	I1025 09:35:15.145128  338241 cri.go:89] found id: "aaacd09fa43cb6730a3a85ccb82d8f4f88d649d37aed22b5d9478f826dd71446"
	I1025 09:35:15.145130  338241 cri.go:89] found id: "5f1abc3fa71fd76f7122379a39679051b1b37e07736695f416558bb08013c9a0"
	I1025 09:35:15.145133  338241 cri.go:89] found id: "ba8a2ae228e5ae5757cffd5f4e4c1b0f6a57d3b7dbac09500e7eb8bad2ffeda6"
	I1025 09:35:15.145143  338241 cri.go:89] found id: "b2e5cedb9fdb4dc8cf750ad182b9d0b075fe38dfe8202975ba1bc91144918969"
	I1025 09:35:15.145148  338241 cri.go:89] found id: "53959ea9bc3e27a71fdfa582a79586fd4fbba5704ce52884b6f578c2371cf734"
	I1025 09:35:15.145152  338241 cri.go:89] found id: "214643b0e233a8f7275185c6308eadd6b3d0e92ec613c31139061014c04338cd"
	I1025 09:35:15.145155  338241 cri.go:89] found id: "1bf763269e9c6cc17fb6ef6bcce3ea5f64cabe52e37b91c75c32967fd2e733f1"
	I1025 09:35:15.145159  338241 cri.go:89] found id: "e59c39fff2eab9a7167d0388e3624c34d57aee469cc349ffd0faa057312a177f"
	I1025 09:35:15.145163  338241 cri.go:89] found id: "42f1c21ebcd7182710da30d4c9fa79ad171f45c43481b3a15df698872a884c69"
	I1025 09:35:15.145179  338241 cri.go:89] found id: "eb8b6e448a83470b682c8b0a60f02504d2943bfc97e4fb2b6411d4a79b1140d5"
	I1025 09:35:15.145188  338241 cri.go:89] found id: "d0697f1703581cace0f227a3658ea08db9401a3bc389367933d7353764a27242"
	I1025 09:35:15.145193  338241 cri.go:89] found id: "b927c0ae13deb3eeb19fde0c0340c1c698692cca9d7ba0a28cdeb1167e99cd0a"
	I1025 09:35:15.145195  338241 cri.go:89] found id: "29d8b7fdf8c84878165f02cb5f082040681528c27127f1406864d3c2332146e8"
	I1025 09:35:15.145198  338241 cri.go:89] found id: "d0a9822bc2dd891d4ea20e8a611c10464e64ace303744b409fe97806e4953c3b"
	I1025 09:35:15.145200  338241 cri.go:89] found id: "19a44bd56404c19362fe3617a93fa60a8364e0f2019540c002e629ef5e155d5f"
	I1025 09:35:15.145203  338241 cri.go:89] found id: "9b9bb34ede66b74552aebc3ff36135f18e45f2f481956f9f42f40048213a058d"
	I1025 09:35:15.145205  338241 cri.go:89] found id: "d0ccb48b50e7aa35ae169d4df1678a14100d85aad31e6a93422011b7ddccd176"
	I1025 09:35:15.145207  338241 cri.go:89] found id: "62e249a5b3adf3fce1b9e30d1842ff783f291aff8d9e9b2c1082ea8806fadacf"
	I1025 09:35:15.145209  338241 cri.go:89] found id: ""
	I1025 09:35:15.145273  338241 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:35:15.160892  338241 out.go:203] 
	W1025 09:35:15.162227  338241 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:35:15.162257  338241 out.go:285] * 
	* 
	W1025 09:35:15.165398  338241 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:35:15.166948  338241 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-582494 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.29s)

TestAddons/parallel/LocalPath (10.18s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-582494 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-582494 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-582494 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [30510180-8020-4e63-8c25-acd7d7c8cf4f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [30510180-8020-4e63-8c25-acd7d7c8cf4f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [30510180-8020-4e63-8c25-acd7d7c8cf4f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003615941s
addons_test.go:967: (dbg) Run:  kubectl --context addons-582494 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-582494 ssh "cat /opt/local-path-provisioner/pvc-122ff3f1-ae75-4f96-94d0-2db3ca74ea0b_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-582494 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-582494 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-582494 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-582494 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (275.905398ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:35:10.966848  337111 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:35:10.967042  337111 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:10.967056  337111 out.go:374] Setting ErrFile to fd 2...
	I1025 09:35:10.967064  337111 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:10.967430  337111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 09:35:10.967801  337111 mustload.go:65] Loading cluster: addons-582494
	I1025 09:35:10.968346  337111 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:10.968371  337111 addons.go:606] checking whether the cluster is paused
	I1025 09:35:10.968501  337111 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:10.968519  337111 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:35:10.969060  337111 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:35:10.990615  337111 ssh_runner.go:195] Run: systemctl --version
	I1025 09:35:10.990851  337111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:35:11.012515  337111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:35:11.115909  337111 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:35:11.115992  337111 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:35:11.148295  337111 cri.go:89] found id: "a590641d195442d7c8f9417c224a3dfd0909fa17ff5dffbbee56d77203a7bc30"
	I1025 09:35:11.148348  337111 cri.go:89] found id: "14680175d4318c6439bfb260920f00d4fb15de1e1ed56f7cd5a7fdc5088d817c"
	I1025 09:35:11.148355  337111 cri.go:89] found id: "8b3ea24513b9dbeed1495a8ece257396262d09ae53d85f508fd9e1aa15fae881"
	I1025 09:35:11.148360  337111 cri.go:89] found id: "1c33d20dccf9d28551e1afe73e2aa2a5233a190fe5036da5597ab8f98d35e7e1"
	I1025 09:35:11.148364  337111 cri.go:89] found id: "19e3e274001e72f84f8eb6cbd581c789c82111bc575de760d12a318646815997"
	I1025 09:35:11.148369  337111 cri.go:89] found id: "fd4a5a7d8c5f4281000825cc9877d3ea27a21a958879a5db98ee78c72c35f3f4"
	I1025 09:35:11.148373  337111 cri.go:89] found id: "aaacd09fa43cb6730a3a85ccb82d8f4f88d649d37aed22b5d9478f826dd71446"
	I1025 09:35:11.148378  337111 cri.go:89] found id: "5f1abc3fa71fd76f7122379a39679051b1b37e07736695f416558bb08013c9a0"
	I1025 09:35:11.148382  337111 cri.go:89] found id: "ba8a2ae228e5ae5757cffd5f4e4c1b0f6a57d3b7dbac09500e7eb8bad2ffeda6"
	I1025 09:35:11.148391  337111 cri.go:89] found id: "b2e5cedb9fdb4dc8cf750ad182b9d0b075fe38dfe8202975ba1bc91144918969"
	I1025 09:35:11.148396  337111 cri.go:89] found id: "53959ea9bc3e27a71fdfa582a79586fd4fbba5704ce52884b6f578c2371cf734"
	I1025 09:35:11.148400  337111 cri.go:89] found id: "214643b0e233a8f7275185c6308eadd6b3d0e92ec613c31139061014c04338cd"
	I1025 09:35:11.148404  337111 cri.go:89] found id: "1bf763269e9c6cc17fb6ef6bcce3ea5f64cabe52e37b91c75c32967fd2e733f1"
	I1025 09:35:11.148407  337111 cri.go:89] found id: "e59c39fff2eab9a7167d0388e3624c34d57aee469cc349ffd0faa057312a177f"
	I1025 09:35:11.148411  337111 cri.go:89] found id: "42f1c21ebcd7182710da30d4c9fa79ad171f45c43481b3a15df698872a884c69"
	I1025 09:35:11.148418  337111 cri.go:89] found id: "eb8b6e448a83470b682c8b0a60f02504d2943bfc97e4fb2b6411d4a79b1140d5"
	I1025 09:35:11.148426  337111 cri.go:89] found id: "d0697f1703581cace0f227a3658ea08db9401a3bc389367933d7353764a27242"
	I1025 09:35:11.148431  337111 cri.go:89] found id: "b927c0ae13deb3eeb19fde0c0340c1c698692cca9d7ba0a28cdeb1167e99cd0a"
	I1025 09:35:11.148435  337111 cri.go:89] found id: "29d8b7fdf8c84878165f02cb5f082040681528c27127f1406864d3c2332146e8"
	I1025 09:35:11.148439  337111 cri.go:89] found id: "d0a9822bc2dd891d4ea20e8a611c10464e64ace303744b409fe97806e4953c3b"
	I1025 09:35:11.148450  337111 cri.go:89] found id: "19a44bd56404c19362fe3617a93fa60a8364e0f2019540c002e629ef5e155d5f"
	I1025 09:35:11.148457  337111 cri.go:89] found id: "9b9bb34ede66b74552aebc3ff36135f18e45f2f481956f9f42f40048213a058d"
	I1025 09:35:11.148461  337111 cri.go:89] found id: "d0ccb48b50e7aa35ae169d4df1678a14100d85aad31e6a93422011b7ddccd176"
	I1025 09:35:11.148465  337111 cri.go:89] found id: "62e249a5b3adf3fce1b9e30d1842ff783f291aff8d9e9b2c1082ea8806fadacf"
	I1025 09:35:11.148471  337111 cri.go:89] found id: ""
	I1025 09:35:11.148527  337111 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:35:11.166542  337111 out.go:203] 
	W1025 09:35:11.167973  337111 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:35:11.168003  337111 out.go:285] * 
	* 
	W1025 09:35:11.171071  337111 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:35:11.173275  337111 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-582494 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (10.18s)
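
Note: the LocalPath provisioning flow itself passed end to end above (test-pvc bound, the test-local-path pod completed, and file1 was read back from /opt/local-path-provisioner over ssh); only the trailing storage-provisioner-rancher disable hit the same runc pause-check failure described under TestAddons/parallel/Headlamp. A hedged one-liner to re-inspect the provisioner's host directory from this run (the path is taken from the log above; the volume may already be reclaimed after the pvc deletion):

	minikube ssh -p addons-582494 -- ls /opt/local-path-provisioner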

TestAddons/parallel/NvidiaDevicePlugin (6.29s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-wln7g" [b1c5c3bc-84d4-426d-988f-f3fdae1b4501] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003773427s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-582494 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-582494 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (287.577404ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:35:04.334702  336471 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:35:04.335765  336471 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:04.335783  336471 out.go:374] Setting ErrFile to fd 2...
	I1025 09:35:04.335791  336471 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:04.336025  336471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 09:35:04.336311  336471 mustload.go:65] Loading cluster: addons-582494
	I1025 09:35:04.336703  336471 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:04.336727  336471 addons.go:606] checking whether the cluster is paused
	I1025 09:35:04.336812  336471 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:04.336825  336471 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:35:04.337241  336471 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:35:04.356728  336471 ssh_runner.go:195] Run: systemctl --version
	I1025 09:35:04.356792  336471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:35:04.378235  336471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:35:04.481013  336471 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:35:04.481106  336471 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:35:04.514260  336471 cri.go:89] found id: "a590641d195442d7c8f9417c224a3dfd0909fa17ff5dffbbee56d77203a7bc30"
	I1025 09:35:04.514280  336471 cri.go:89] found id: "14680175d4318c6439bfb260920f00d4fb15de1e1ed56f7cd5a7fdc5088d817c"
	I1025 09:35:04.514284  336471 cri.go:89] found id: "8b3ea24513b9dbeed1495a8ece257396262d09ae53d85f508fd9e1aa15fae881"
	I1025 09:35:04.514287  336471 cri.go:89] found id: "1c33d20dccf9d28551e1afe73e2aa2a5233a190fe5036da5597ab8f98d35e7e1"
	I1025 09:35:04.514290  336471 cri.go:89] found id: "19e3e274001e72f84f8eb6cbd581c789c82111bc575de760d12a318646815997"
	I1025 09:35:04.514294  336471 cri.go:89] found id: "fd4a5a7d8c5f4281000825cc9877d3ea27a21a958879a5db98ee78c72c35f3f4"
	I1025 09:35:04.514297  336471 cri.go:89] found id: "aaacd09fa43cb6730a3a85ccb82d8f4f88d649d37aed22b5d9478f826dd71446"
	I1025 09:35:04.514299  336471 cri.go:89] found id: "5f1abc3fa71fd76f7122379a39679051b1b37e07736695f416558bb08013c9a0"
	I1025 09:35:04.514301  336471 cri.go:89] found id: "ba8a2ae228e5ae5757cffd5f4e4c1b0f6a57d3b7dbac09500e7eb8bad2ffeda6"
	I1025 09:35:04.514310  336471 cri.go:89] found id: "b2e5cedb9fdb4dc8cf750ad182b9d0b075fe38dfe8202975ba1bc91144918969"
	I1025 09:35:04.514312  336471 cri.go:89] found id: "53959ea9bc3e27a71fdfa582a79586fd4fbba5704ce52884b6f578c2371cf734"
	I1025 09:35:04.514336  336471 cri.go:89] found id: "214643b0e233a8f7275185c6308eadd6b3d0e92ec613c31139061014c04338cd"
	I1025 09:35:04.514344  336471 cri.go:89] found id: "1bf763269e9c6cc17fb6ef6bcce3ea5f64cabe52e37b91c75c32967fd2e733f1"
	I1025 09:35:04.514347  336471 cri.go:89] found id: "e59c39fff2eab9a7167d0388e3624c34d57aee469cc349ffd0faa057312a177f"
	I1025 09:35:04.514352  336471 cri.go:89] found id: "42f1c21ebcd7182710da30d4c9fa79ad171f45c43481b3a15df698872a884c69"
	I1025 09:35:04.514375  336471 cri.go:89] found id: "eb8b6e448a83470b682c8b0a60f02504d2943bfc97e4fb2b6411d4a79b1140d5"
	I1025 09:35:04.514384  336471 cri.go:89] found id: "d0697f1703581cace0f227a3658ea08db9401a3bc389367933d7353764a27242"
	I1025 09:35:04.514389  336471 cri.go:89] found id: "b927c0ae13deb3eeb19fde0c0340c1c698692cca9d7ba0a28cdeb1167e99cd0a"
	I1025 09:35:04.514392  336471 cri.go:89] found id: "29d8b7fdf8c84878165f02cb5f082040681528c27127f1406864d3c2332146e8"
	I1025 09:35:04.514395  336471 cri.go:89] found id: "d0a9822bc2dd891d4ea20e8a611c10464e64ace303744b409fe97806e4953c3b"
	I1025 09:35:04.514397  336471 cri.go:89] found id: "19a44bd56404c19362fe3617a93fa60a8364e0f2019540c002e629ef5e155d5f"
	I1025 09:35:04.514399  336471 cri.go:89] found id: "9b9bb34ede66b74552aebc3ff36135f18e45f2f481956f9f42f40048213a058d"
	I1025 09:35:04.514401  336471 cri.go:89] found id: "d0ccb48b50e7aa35ae169d4df1678a14100d85aad31e6a93422011b7ddccd176"
	I1025 09:35:04.514404  336471 cri.go:89] found id: "62e249a5b3adf3fce1b9e30d1842ff783f291aff8d9e9b2c1082ea8806fadacf"
	I1025 09:35:04.514406  336471 cri.go:89] found id: ""
	I1025 09:35:04.514445  336471 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:35:04.531154  336471 out.go:203] 
	W1025 09:35:04.532947  336471 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:35:04.532983  336471 out.go:285] * 
	* 
	W1025 09:35:04.536042  336471 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:35:04.537551  336471 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-582494 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.29s)

TestAddons/parallel/Yakd (5.28s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-bjt42" [62475ca3-6db5-4286-9323-2143e807f31c] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003684728s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-582494 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-582494 addons disable yakd --alsologtostderr -v=1: exit status 11 (270.981909ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:35:09.610968  336791 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:35:09.612005  336791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:09.612022  336791 out.go:374] Setting ErrFile to fd 2...
	I1025 09:35:09.612028  336791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:09.612265  336791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 09:35:09.612647  336791 mustload.go:65] Loading cluster: addons-582494
	I1025 09:35:09.613189  336791 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:09.613211  336791 addons.go:606] checking whether the cluster is paused
	I1025 09:35:09.613359  336791 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:09.613378  336791 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:35:09.613843  336791 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:35:09.634172  336791 ssh_runner.go:195] Run: systemctl --version
	I1025 09:35:09.634228  336791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:35:09.655863  336791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:35:09.759517  336791 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:35:09.759606  336791 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:35:09.792268  336791 cri.go:89] found id: "a590641d195442d7c8f9417c224a3dfd0909fa17ff5dffbbee56d77203a7bc30"
	I1025 09:35:09.792299  336791 cri.go:89] found id: "14680175d4318c6439bfb260920f00d4fb15de1e1ed56f7cd5a7fdc5088d817c"
	I1025 09:35:09.792303  336791 cri.go:89] found id: "8b3ea24513b9dbeed1495a8ece257396262d09ae53d85f508fd9e1aa15fae881"
	I1025 09:35:09.792308  336791 cri.go:89] found id: "1c33d20dccf9d28551e1afe73e2aa2a5233a190fe5036da5597ab8f98d35e7e1"
	I1025 09:35:09.792311  336791 cri.go:89] found id: "19e3e274001e72f84f8eb6cbd581c789c82111bc575de760d12a318646815997"
	I1025 09:35:09.792338  336791 cri.go:89] found id: "fd4a5a7d8c5f4281000825cc9877d3ea27a21a958879a5db98ee78c72c35f3f4"
	I1025 09:35:09.792343  336791 cri.go:89] found id: "aaacd09fa43cb6730a3a85ccb82d8f4f88d649d37aed22b5d9478f826dd71446"
	I1025 09:35:09.792347  336791 cri.go:89] found id: "5f1abc3fa71fd76f7122379a39679051b1b37e07736695f416558bb08013c9a0"
	I1025 09:35:09.792351  336791 cri.go:89] found id: "ba8a2ae228e5ae5757cffd5f4e4c1b0f6a57d3b7dbac09500e7eb8bad2ffeda6"
	I1025 09:35:09.792369  336791 cri.go:89] found id: "b2e5cedb9fdb4dc8cf750ad182b9d0b075fe38dfe8202975ba1bc91144918969"
	I1025 09:35:09.792373  336791 cri.go:89] found id: "53959ea9bc3e27a71fdfa582a79586fd4fbba5704ce52884b6f578c2371cf734"
	I1025 09:35:09.792376  336791 cri.go:89] found id: "214643b0e233a8f7275185c6308eadd6b3d0e92ec613c31139061014c04338cd"
	I1025 09:35:09.792378  336791 cri.go:89] found id: "1bf763269e9c6cc17fb6ef6bcce3ea5f64cabe52e37b91c75c32967fd2e733f1"
	I1025 09:35:09.792381  336791 cri.go:89] found id: "e59c39fff2eab9a7167d0388e3624c34d57aee469cc349ffd0faa057312a177f"
	I1025 09:35:09.792383  336791 cri.go:89] found id: "42f1c21ebcd7182710da30d4c9fa79ad171f45c43481b3a15df698872a884c69"
	I1025 09:35:09.792400  336791 cri.go:89] found id: "eb8b6e448a83470b682c8b0a60f02504d2943bfc97e4fb2b6411d4a79b1140d5"
	I1025 09:35:09.792411  336791 cri.go:89] found id: "d0697f1703581cace0f227a3658ea08db9401a3bc389367933d7353764a27242"
	I1025 09:35:09.792415  336791 cri.go:89] found id: "b927c0ae13deb3eeb19fde0c0340c1c698692cca9d7ba0a28cdeb1167e99cd0a"
	I1025 09:35:09.792418  336791 cri.go:89] found id: "29d8b7fdf8c84878165f02cb5f082040681528c27127f1406864d3c2332146e8"
	I1025 09:35:09.792420  336791 cri.go:89] found id: "d0a9822bc2dd891d4ea20e8a611c10464e64ace303744b409fe97806e4953c3b"
	I1025 09:35:09.792423  336791 cri.go:89] found id: "19a44bd56404c19362fe3617a93fa60a8364e0f2019540c002e629ef5e155d5f"
	I1025 09:35:09.792425  336791 cri.go:89] found id: "9b9bb34ede66b74552aebc3ff36135f18e45f2f481956f9f42f40048213a058d"
	I1025 09:35:09.792427  336791 cri.go:89] found id: "d0ccb48b50e7aa35ae169d4df1678a14100d85aad31e6a93422011b7ddccd176"
	I1025 09:35:09.792430  336791 cri.go:89] found id: "62e249a5b3adf3fce1b9e30d1842ff783f291aff8d9e9b2c1082ea8806fadacf"
	I1025 09:35:09.792432  336791 cri.go:89] found id: ""
	I1025 09:35:09.792488  336791 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:35:09.807470  336791 out.go:203] 
	W1025 09:35:09.808878  336791 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:35:09.808914  336791 out.go:285] * 
	* 
	W1025 09:35:09.812029  336791 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:35:09.814265  336791 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-582494 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.28s)

TestAddons/parallel/AmdGpuDevicePlugin (6.29s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-j28pq" [7fd6ba52-5537-4fa5-b6d7-de8391687595] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.004190355s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-582494 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-582494 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (287.230313ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:35:04.329187  336470 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:35:04.329376  336470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:04.329390  336470 out.go:374] Setting ErrFile to fd 2...
	I1025 09:35:04.329397  336470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:04.329711  336470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 09:35:04.330104  336470 mustload.go:65] Loading cluster: addons-582494
	I1025 09:35:04.330656  336470 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:04.331074  336470 addons.go:606] checking whether the cluster is paused
	I1025 09:35:04.331248  336470 config.go:182] Loaded profile config "addons-582494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:04.331267  336470 host.go:66] Checking if "addons-582494" exists ...
	I1025 09:35:04.331931  336470 cli_runner.go:164] Run: docker container inspect addons-582494 --format={{.State.Status}}
	I1025 09:35:04.353210  336470 ssh_runner.go:195] Run: systemctl --version
	I1025 09:35:04.353272  336470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-582494
	I1025 09:35:04.373809  336470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/addons-582494/id_rsa Username:docker}
	I1025 09:35:04.477209  336470 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:35:04.477329  336470 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:35:04.512565  336470 cri.go:89] found id: "a590641d195442d7c8f9417c224a3dfd0909fa17ff5dffbbee56d77203a7bc30"
	I1025 09:35:04.512595  336470 cri.go:89] found id: "14680175d4318c6439bfb260920f00d4fb15de1e1ed56f7cd5a7fdc5088d817c"
	I1025 09:35:04.512599  336470 cri.go:89] found id: "8b3ea24513b9dbeed1495a8ece257396262d09ae53d85f508fd9e1aa15fae881"
	I1025 09:35:04.512602  336470 cri.go:89] found id: "1c33d20dccf9d28551e1afe73e2aa2a5233a190fe5036da5597ab8f98d35e7e1"
	I1025 09:35:04.512605  336470 cri.go:89] found id: "19e3e274001e72f84f8eb6cbd581c789c82111bc575de760d12a318646815997"
	I1025 09:35:04.512609  336470 cri.go:89] found id: "fd4a5a7d8c5f4281000825cc9877d3ea27a21a958879a5db98ee78c72c35f3f4"
	I1025 09:35:04.512612  336470 cri.go:89] found id: "aaacd09fa43cb6730a3a85ccb82d8f4f88d649d37aed22b5d9478f826dd71446"
	I1025 09:35:04.512614  336470 cri.go:89] found id: "5f1abc3fa71fd76f7122379a39679051b1b37e07736695f416558bb08013c9a0"
	I1025 09:35:04.512617  336470 cri.go:89] found id: "ba8a2ae228e5ae5757cffd5f4e4c1b0f6a57d3b7dbac09500e7eb8bad2ffeda6"
	I1025 09:35:04.512657  336470 cri.go:89] found id: "b2e5cedb9fdb4dc8cf750ad182b9d0b075fe38dfe8202975ba1bc91144918969"
	I1025 09:35:04.512660  336470 cri.go:89] found id: "53959ea9bc3e27a71fdfa582a79586fd4fbba5704ce52884b6f578c2371cf734"
	I1025 09:35:04.512669  336470 cri.go:89] found id: "214643b0e233a8f7275185c6308eadd6b3d0e92ec613c31139061014c04338cd"
	I1025 09:35:04.512679  336470 cri.go:89] found id: "1bf763269e9c6cc17fb6ef6bcce3ea5f64cabe52e37b91c75c32967fd2e733f1"
	I1025 09:35:04.512686  336470 cri.go:89] found id: "e59c39fff2eab9a7167d0388e3624c34d57aee469cc349ffd0faa057312a177f"
	I1025 09:35:04.512688  336470 cri.go:89] found id: "42f1c21ebcd7182710da30d4c9fa79ad171f45c43481b3a15df698872a884c69"
	I1025 09:35:04.512697  336470 cri.go:89] found id: "eb8b6e448a83470b682c8b0a60f02504d2943bfc97e4fb2b6411d4a79b1140d5"
	I1025 09:35:04.512703  336470 cri.go:89] found id: "d0697f1703581cace0f227a3658ea08db9401a3bc389367933d7353764a27242"
	I1025 09:35:04.512707  336470 cri.go:89] found id: "b927c0ae13deb3eeb19fde0c0340c1c698692cca9d7ba0a28cdeb1167e99cd0a"
	I1025 09:35:04.512710  336470 cri.go:89] found id: "29d8b7fdf8c84878165f02cb5f082040681528c27127f1406864d3c2332146e8"
	I1025 09:35:04.512712  336470 cri.go:89] found id: "d0a9822bc2dd891d4ea20e8a611c10464e64ace303744b409fe97806e4953c3b"
	I1025 09:35:04.512718  336470 cri.go:89] found id: "19a44bd56404c19362fe3617a93fa60a8364e0f2019540c002e629ef5e155d5f"
	I1025 09:35:04.512721  336470 cri.go:89] found id: "9b9bb34ede66b74552aebc3ff36135f18e45f2f481956f9f42f40048213a058d"
	I1025 09:35:04.512723  336470 cri.go:89] found id: "d0ccb48b50e7aa35ae169d4df1678a14100d85aad31e6a93422011b7ddccd176"
	I1025 09:35:04.512726  336470 cri.go:89] found id: "62e249a5b3adf3fce1b9e30d1842ff783f291aff8d9e9b2c1082ea8806fadacf"
	I1025 09:35:04.512728  336470 cri.go:89] found id: ""
	I1025 09:35:04.512771  336470 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:35:04.529901  336470 out.go:203] 
	W1025 09:35:04.532141  336470 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:35:04.532164  336470 out.go:285] * 
	* 
	W1025 09:35:04.535417  336470 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:35:04.536773  336470 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-582494 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.29s)

TestFunctional/parallel/ServiceCmdConnect (603.1s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-558764 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-558764 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-pq82f" [b619c872-a2bf-4835-99db-4e339351cc4a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-558764 -n functional-558764
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-25 09:51:23.230751393 +0000 UTC m=+1157.318447263
functional_test.go:1645: (dbg) Run:  kubectl --context functional-558764 describe po hello-node-connect-7d85dfc575-pq82f -n default
functional_test.go:1645: (dbg) kubectl --context functional-558764 describe po hello-node-connect-7d85dfc575-pq82f -n default:
Name:             hello-node-connect-7d85dfc575-pq82f
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-558764/192.168.49.2
Start Time:       Sat, 25 Oct 2025 09:41:22 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-szt5x (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-szt5x:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-pq82f to functional-558764
Normal   Pulling    7m8s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m8s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m8s (x5 over 9m59s)    kubelet            Error: ErrImagePull
Normal   BackOff    4m52s (x22 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m52s (x22 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-558764 logs hello-node-connect-7d85dfc575-pq82f -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-558764 logs hello-node-connect-7d85dfc575-pq82f -n default: exit status 1 (65.532624ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-pq82f" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-558764 logs hello-node-connect-7d85dfc575-pq82f -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-558764 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-pq82f
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-558764/192.168.49.2
Start Time:       Sat, 25 Oct 2025 09:41:22 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-szt5x (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-szt5x:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-pq82f to functional-558764
  Normal   Pulling    7m8s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m8s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m8s (x5 over 9m59s)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m52s (x22 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m52s (x22 over 9m59s)  kubelet            Error: ImagePullBackOff
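
The Warning/Failed events above carry the root cause for this test: CRI-O on the node resolves image names with short-name enforcement, so the unqualified reference "kicbase/echo-server" is ambiguous (more than one unqualified-search registry could serve it) and the kubelet can never complete the pull non-interactively. A minimal sketch of the usual remedies, reusing the deployment and container names from the describe output above; the commands are illustrative, were not part of this run, and docker.io is an assumption about where the image actually lives:

    # Fully qualify the image so no short-name resolution is needed:
    kubectl --context functional-558764 set image deployment/hello-node-connect \
        echo-server=docker.io/kicbase/echo-server:latest

    # Or, node-level: add a short-name alias in a registries.conf.d drop-in
    # (TOML) inside the minikube node, e.g.
    #     [aliases]
    #     "kicbase/echo-server" = "docker.io/kicbase/echo-server"

Either approach makes the reference deterministic; loosening short-name-mode from "enforcing" to "permissive" in /etc/containers/registries.conf would also work, but weakens the protection that mode exists to provide.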

functional_test.go:1618: (dbg) Run:  kubectl --context functional-558764 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-558764 logs -l app=hello-node-connect: exit status 1 (63.567916ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-pq82f" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-558764 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-558764 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.108.206.133
IPs:                      10.108.206.133
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30199/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
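
The empty Endpoints field here is a downstream symptom rather than a second failure: the selector app=hello-node-connect matches only the ImagePullBackOff pod, which never turns Ready, so nothing is published behind NodePort 30199 and every service URL probe has to time out. A quick way to confirm the chain (a hypothetical follow-up command, not executed by the test):

    kubectl --context functional-558764 get endpointslices \
        -l kubernetes.io/service-name=hello-node-connect -o wide

With the pod NotReady, this should list its IP only as a not-ready endpoint (or no endpoints at all), matching the blank Endpoints line above.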
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-558764
helpers_test.go:243: (dbg) docker inspect functional-558764:

-- stdout --
	[
	    {
	        "Id": "18b1367ea72ee794a8bf2fd1f8ea840efb382ef184933b86410a06e9ff80ff97",
	        "Created": "2025-10-25T09:39:02.863375302Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 349837,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:39:02.900142857Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/18b1367ea72ee794a8bf2fd1f8ea840efb382ef184933b86410a06e9ff80ff97/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/18b1367ea72ee794a8bf2fd1f8ea840efb382ef184933b86410a06e9ff80ff97/hostname",
	        "HostsPath": "/var/lib/docker/containers/18b1367ea72ee794a8bf2fd1f8ea840efb382ef184933b86410a06e9ff80ff97/hosts",
	        "LogPath": "/var/lib/docker/containers/18b1367ea72ee794a8bf2fd1f8ea840efb382ef184933b86410a06e9ff80ff97/18b1367ea72ee794a8bf2fd1f8ea840efb382ef184933b86410a06e9ff80ff97-json.log",
	        "Name": "/functional-558764",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-558764:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-558764",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "18b1367ea72ee794a8bf2fd1f8ea840efb382ef184933b86410a06e9ff80ff97",
	                "LowerDir": "/var/lib/docker/overlay2/02c781c266a1636013badeac697aa72cace66cc14189fff44b42b22f3a90d6e4-init/diff:/var/lib/docker/overlay2/9d1960ec6ab151df7efbd4d43f9ccd1aaf4e0bd9e7db4285118644a6d5a54279/diff",
	                "MergedDir": "/var/lib/docker/overlay2/02c781c266a1636013badeac697aa72cace66cc14189fff44b42b22f3a90d6e4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/02c781c266a1636013badeac697aa72cace66cc14189fff44b42b22f3a90d6e4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/02c781c266a1636013badeac697aa72cace66cc14189fff44b42b22f3a90d6e4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-558764",
	                "Source": "/var/lib/docker/volumes/functional-558764/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-558764",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-558764",
	                "name.minikube.sigs.k8s.io": "functional-558764",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6695f65b113b975f3eb1983013e06d2a320cb93ae1ec8c5a12ebf3349e1e60fe",
	            "SandboxKey": "/var/run/docker/netns/6695f65b113b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-558764": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:01:03:0c:3d:42",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a3097fd519916c1121d107f7b64da0e06c30411de88520c2313f7d0632bd30e0",
	                    "EndpointID": "ea979f625e0e7a3de10d17898e8b4c7ef345ee51d645633d7f45de349a51f9b0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-558764",
	                        "18b1367ea72e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-558764 -n functional-558764
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-558764 logs -n 25: (1.373259291s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-558764 ssh sudo cat /usr/share/ca-certificates/3254552.pem                                                                                           │ functional-558764 │ jenkins │ v1.37.0 │ 25 Oct 25 09:41 UTC │ 25 Oct 25 09:41 UTC │
	│ ssh            │ functional-558764 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-558764 │ jenkins │ v1.37.0 │ 25 Oct 25 09:41 UTC │ 25 Oct 25 09:41 UTC │
	│ ssh            │ functional-558764 ssh sudo cat /etc/test/nested/copy/325455/hosts                                                                                               │ functional-558764 │ jenkins │ v1.37.0 │ 25 Oct 25 09:41 UTC │ 25 Oct 25 09:41 UTC │
	│ image          │ functional-558764 image load --daemon kicbase/echo-server:functional-558764 --alsologtostderr                                                                   │ functional-558764 │ jenkins │ v1.37.0 │ 25 Oct 25 09:41 UTC │ 25 Oct 25 09:41 UTC │
	│ image          │ functional-558764 image ls                                                                                                                                      │ functional-558764 │ jenkins │ v1.37.0 │ 25 Oct 25 09:41 UTC │ 25 Oct 25 09:41 UTC │
	│ image          │ functional-558764 image save kicbase/echo-server:functional-558764 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-558764 │ jenkins │ v1.37.0 │ 25 Oct 25 09:41 UTC │ 25 Oct 25 09:41 UTC │
	│ image          │ functional-558764 image rm kicbase/echo-server:functional-558764 --alsologtostderr                                                                              │ functional-558764 │ jenkins │ v1.37.0 │ 25 Oct 25 09:41 UTC │ 25 Oct 25 09:41 UTC │
	│ image          │ functional-558764 image ls                                                                                                                                      │ functional-558764 │ jenkins │ v1.37.0 │ 25 Oct 25 09:41 UTC │ 25 Oct 25 09:41 UTC │
	│ image          │ functional-558764 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-558764 │ jenkins │ v1.37.0 │ 25 Oct 25 09:41 UTC │ 25 Oct 25 09:41 UTC │
	│ image          │ functional-558764 image save --daemon kicbase/echo-server:functional-558764 --alsologtostderr                                                                   │ functional-558764 │ jenkins │ v1.37.0 │ 25 Oct 25 09:41 UTC │ 25 Oct 25 09:41 UTC │
	│ update-context │ functional-558764 update-context --alsologtostderr -v=2                                                                                                         │ functional-558764 │ jenkins │ v1.37.0 │ 25 Oct 25 09:41 UTC │ 25 Oct 25 09:41 UTC │
	│ update-context │ functional-558764 update-context --alsologtostderr -v=2                                                                                                         │ functional-558764 │ jenkins │ v1.37.0 │ 25 Oct 25 09:41 UTC │ 25 Oct 25 09:41 UTC │
	│ update-context │ functional-558764 update-context --alsologtostderr -v=2                                                                                                         │ functional-558764 │ jenkins │ v1.37.0 │ 25 Oct 25 09:41 UTC │ 25 Oct 25 09:41 UTC │
	│ image          │ functional-558764 image ls --format short --alsologtostderr                                                                                                     │ functional-558764 │ jenkins │ v1.37.0 │ 25 Oct 25 09:41 UTC │ 25 Oct 25 09:41 UTC │
	│ ssh            │ functional-558764 ssh pgrep buildkitd                                                                                                                           │ functional-558764 │ jenkins │ v1.37.0 │ 25 Oct 25 09:41 UTC │                     │
	│ image          │ functional-558764 image build -t localhost/my-image:functional-558764 testdata/build --alsologtostderr                                                          │ functional-558764 │ jenkins │ v1.37.0 │ 25 Oct 25 09:41 UTC │ 25 Oct 25 09:41 UTC │
	│ image          │ functional-558764 image ls                                                                                                                                      │ functional-558764 │ jenkins │ v1.37.0 │ 25 Oct 25 09:41 UTC │ 25 Oct 25 09:41 UTC │
	│ image          │ functional-558764 image ls --format json --alsologtostderr                                                                                                      │ functional-558764 │ jenkins │ v1.37.0 │ 25 Oct 25 09:41 UTC │ 25 Oct 25 09:41 UTC │
	│ image          │ functional-558764 image ls --format table --alsologtostderr                                                                                                     │ functional-558764 │ jenkins │ v1.37.0 │ 25 Oct 25 09:41 UTC │ 25 Oct 25 09:41 UTC │
	│ image          │ functional-558764 image ls --format yaml --alsologtostderr                                                                                                      │ functional-558764 │ jenkins │ v1.37.0 │ 25 Oct 25 09:41 UTC │ 25 Oct 25 09:41 UTC │
	│ service        │ functional-558764 service list                                                                                                                                  │ functional-558764 │ jenkins │ v1.37.0 │ 25 Oct 25 09:51 UTC │ 25 Oct 25 09:51 UTC │
	│ service        │ functional-558764 service list -o json                                                                                                                          │ functional-558764 │ jenkins │ v1.37.0 │ 25 Oct 25 09:51 UTC │ 25 Oct 25 09:51 UTC │
	│ service        │ functional-558764 service --namespace=default --https --url hello-node                                                                                          │ functional-558764 │ jenkins │ v1.37.0 │ 25 Oct 25 09:51 UTC │                     │
	│ service        │ functional-558764 service hello-node --url --format={{.IP}}                                                                                                     │ functional-558764 │ jenkins │ v1.37.0 │ 25 Oct 25 09:51 UTC │                     │
	│ service        │ functional-558764 service hello-node --url                                                                                                                      │ functional-558764 │ jenkins │ v1.37.0 │ 25 Oct 25 09:51 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:41:15
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:41:15.807729  358517 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:41:15.808042  358517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:41:15.808055  358517 out.go:374] Setting ErrFile to fd 2...
	I1025 09:41:15.808061  358517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:41:15.808383  358517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 09:41:15.808852  358517 out.go:368] Setting JSON to false
	I1025 09:41:15.810082  358517 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5025,"bootTime":1761380251,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:41:15.810222  358517 start.go:141] virtualization: kvm guest
	I1025 09:41:15.812239  358517 out.go:179] * [functional-558764] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:41:15.813499  358517 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 09:41:15.813479  358517 notify.go:220] Checking for updates...
	I1025 09:41:15.817587  358517 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:41:15.818944  358517 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 09:41:15.820286  358517 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	I1025 09:41:15.821529  358517 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:41:15.822909  358517 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:41:15.826063  358517 config.go:182] Loaded profile config "functional-558764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:41:15.826551  358517 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:41:15.852336  358517 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:41:15.852426  358517 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:41:15.929334  358517 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-10-25 09:41:15.913485983 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:41:15.929473  358517 docker.go:318] overlay module found
	I1025 09:41:15.932511  358517 out.go:179] * Using the docker driver based on existing profile
	I1025 09:41:15.933994  358517 start.go:305] selected driver: docker
	I1025 09:41:15.934016  358517 start.go:925] validating driver "docker" against &{Name:functional-558764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-558764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:41:15.934133  358517 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:41:15.934238  358517 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:41:16.026593  358517 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-10-25 09:41:16.014935787 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:41:16.027289  358517 cni.go:84] Creating CNI manager for ""
	I1025 09:41:16.027380  358517 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:41:16.027451  358517 start.go:349] cluster config:
	{Name:functional-558764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-558764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:41:16.029642  358517 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 25 09:41:53 functional-558764 crio[3549]: time="2025-10-25T09:41:53.609680639Z" level=info msg="Creating container: default/mysql-5bb876957f-slf9s/mysql" id=b396ba11-e737-408c-836a-f35449ed265e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:41:53 functional-558764 crio[3549]: time="2025-10-25T09:41:53.609862768Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:41:53 functional-558764 crio[3549]: time="2025-10-25T09:41:53.615479685Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:41:53 functional-558764 crio[3549]: time="2025-10-25T09:41:53.616080669Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:41:53 functional-558764 crio[3549]: time="2025-10-25T09:41:53.649852049Z" level=info msg="Created container 0740a8d5abaf01294caf775cef3b072cb39f6b78985edd6398db33f7efef1015: default/mysql-5bb876957f-slf9s/mysql" id=b396ba11-e737-408c-836a-f35449ed265e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:41:53 functional-558764 crio[3549]: time="2025-10-25T09:41:53.650749091Z" level=info msg="Starting container: 0740a8d5abaf01294caf775cef3b072cb39f6b78985edd6398db33f7efef1015" id=1ca347ca-0eca-47bf-8abc-a84555b91957 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:41:53 functional-558764 crio[3549]: time="2025-10-25T09:41:53.652594258Z" level=info msg="Started container" PID=7438 containerID=0740a8d5abaf01294caf775cef3b072cb39f6b78985edd6398db33f7efef1015 description=default/mysql-5bb876957f-slf9s/mysql id=1ca347ca-0eca-47bf-8abc-a84555b91957 name=/runtime.v1.RuntimeService/StartContainer sandboxID=16c3395f334388ea1e9d8f6a4036cbe54554bba534bcf1204515611e9fb9367e
	Oct 25 09:41:55 functional-558764 crio[3549]: time="2025-10-25T09:41:55.560982824Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e706c120-43a6-4e43-8568-aa4d79a40dc8 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:42:01 functional-558764 crio[3549]: time="2025-10-25T09:42:01.561404971Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2526257f-c7a4-4383-8cc3-0290b843cece name=/runtime.v1.ImageService/PullImage
	Oct 25 09:42:12 functional-558764 crio[3549]: time="2025-10-25T09:42:12.556128311Z" level=info msg="Stopping pod sandbox: faaa2d5f09743a1bee0b3cded2f47807201986744d1770fc0d7b6f480074fd45" id=3982d3b6-b247-481a-b428-599f96bfba96 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:42:12 functional-558764 crio[3549]: time="2025-10-25T09:42:12.55618255Z" level=info msg="Stopped pod sandbox (already stopped): faaa2d5f09743a1bee0b3cded2f47807201986744d1770fc0d7b6f480074fd45" id=3982d3b6-b247-481a-b428-599f96bfba96 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:42:12 functional-558764 crio[3549]: time="2025-10-25T09:42:12.556479931Z" level=info msg="Removing pod sandbox: faaa2d5f09743a1bee0b3cded2f47807201986744d1770fc0d7b6f480074fd45" id=d9dc8bc7-c646-46e1-8306-d8d2726e9d73 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:42:12 functional-558764 crio[3549]: time="2025-10-25T09:42:12.559494073Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:42:12 functional-558764 crio[3549]: time="2025-10-25T09:42:12.559560091Z" level=info msg="Removed pod sandbox: faaa2d5f09743a1bee0b3cded2f47807201986744d1770fc0d7b6f480074fd45" id=d9dc8bc7-c646-46e1-8306-d8d2726e9d73 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:42:12 functional-558764 crio[3549]: time="2025-10-25T09:42:12.560107969Z" level=info msg="Stopping pod sandbox: ea76cafca02ae6a513c47bc84e9afe9a8d20d523577d7498e890ba42725569c0" id=f578d93f-8b00-4264-92e0-5868ad1fafb0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:42:12 functional-558764 crio[3549]: time="2025-10-25T09:42:12.560168594Z" level=info msg="Stopped pod sandbox (already stopped): ea76cafca02ae6a513c47bc84e9afe9a8d20d523577d7498e890ba42725569c0" id=f578d93f-8b00-4264-92e0-5868ad1fafb0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:42:12 functional-558764 crio[3549]: time="2025-10-25T09:42:12.560541174Z" level=info msg="Removing pod sandbox: ea76cafca02ae6a513c47bc84e9afe9a8d20d523577d7498e890ba42725569c0" id=b6bb877d-dc73-41ff-b072-8df2d7084ab8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:42:12 functional-558764 crio[3549]: time="2025-10-25T09:42:12.562842809Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:42:12 functional-558764 crio[3549]: time="2025-10-25T09:42:12.562898035Z" level=info msg="Removed pod sandbox: ea76cafca02ae6a513c47bc84e9afe9a8d20d523577d7498e890ba42725569c0" id=b6bb877d-dc73-41ff-b072-8df2d7084ab8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:42:43 functional-558764 crio[3549]: time="2025-10-25T09:42:43.561202254Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5d95fb5c-1e0f-4d2e-a5ac-a15ba5b45c68 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:42:48 functional-558764 crio[3549]: time="2025-10-25T09:42:48.561219151Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2e34cef4-9d0f-416a-99de-44e43c3c7b75 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:44:11 functional-558764 crio[3549]: time="2025-10-25T09:44:11.561938706Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f6b79304-9041-4ba2-80cf-b910b27b8e05 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:44:15 functional-558764 crio[3549]: time="2025-10-25T09:44:15.561881402Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=dded7889-a210-4811-b425-835b7786ccbb name=/runtime.v1.ImageService/PullImage
	Oct 25 09:46:58 functional-558764 crio[3549]: time="2025-10-25T09:46:58.563387914Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6b4a31a2-6133-42e0-ba0c-082fc1bfe8f9 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:47:00 functional-558764 crio[3549]: time="2025-10-25T09:47:00.562633071Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=acbb4a9c-cc01-48c3-8662-82808ec6d3cc name=/runtime.v1.ImageService/PullImage
	
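The CRI-O log above matches the kubelet back-off from the pod events: "Pulling image: kicbase/echo-server:latest" recurs (in pairs, presumably one per hello-node deployment) at lengthening intervals (09:41:55, 09:42:01, 09:42:43, 09:42:48, 09:44:11, 09:44:15, 09:46:58, 09:47:00) and no successful pull is ever logged. The short-name failure can be reproduced on the node without the kubelet; a hedged sketch, assuming crictl is available inside the kic node as usual:

    minikube -p functional-558764 ssh -- sudo crictl pull kicbase/echo-server:latest

This should fail with the same "short name mode is enforcing ... returns ambiguous list" error seen in the pod events, while the fully qualified docker.io/kicbase/echo-server:latest (if that is indeed the image's home) should succeed.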
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	0740a8d5abaf0       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   16c3395f33438       mysql-5bb876957f-slf9s                       default
	162a000ec7bbf       docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8                  9 minutes ago       Running             myfrontend                  0                   69a9b03d25ba3       sp-pod                                       default
	05959414a0af2       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   28064fc8aded1       busybox-mount                                default
	f990f2af98ec9       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                  9 minutes ago       Running             nginx                       0                   ecd8ff159c64c       nginx-svc                                    default
	9986ae1f7872b       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   10 minutes ago      Running             dashboard-metrics-scraper   0                   91468bed31f85       dashboard-metrics-scraper-77bf4d6c4c-m9rhq   kubernetes-dashboard
	14218f74b64f0       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         10 minutes ago      Running             kubernetes-dashboard        0                   264b38d200fec       kubernetes-dashboard-855c9754f9-skv4s        kubernetes-dashboard
	a7dee3581b262       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         2                   e411ec367cb94       storage-provisioner                          kube-system
	bc67fc0880015       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   2a076f8cbaab1       kube-apiserver-functional-558764             kube-system
	b53dafc030d02       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     1                   9e6b2dd745f6b       kube-controller-manager-functional-558764    kube-system
	299c4ed37ac60       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              1                   a5a188bb418c3       kube-scheduler-functional-558764             kube-system
	1dbda8a873eca       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   60e693e45b14c       etcd-functional-558764                       kube-system
	46b48a390c5b6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         1                   e411ec367cb94       storage-provisioner                          kube-system
	23c85d4f9878a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Running             coredns                     1                   761c3d3504885       coredns-66bc5c9577-5mb2p                     kube-system
	7a760fdbb3d39       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Running             kube-proxy                  1                   381cd1fd47ae3       kube-proxy-7j8x4                             kube-system
	3036c9a592f3b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   a68e68be51386       kindnet-rnsq9                                kube-system
	6f40f797ed1b5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   761c3d3504885       coredns-66bc5c9577-5mb2p                     kube-system
	15a00cf77c441       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 12 minutes ago      Exited              kindnet-cni                 0                   a68e68be51386       kindnet-rnsq9                                kube-system
	1cae7c6166dc1       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 12 minutes ago      Exited              kube-proxy                  0                   381cd1fd47ae3       kube-proxy-7j8x4                             kube-system
	212be9ab6fb74       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 12 minutes ago      Exited              kube-scheduler              0                   a5a188bb418c3       kube-scheduler-functional-558764             kube-system
	c91f0f15f8089       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 12 minutes ago      Exited              etcd                        0                   60e693e45b14c       etcd-functional-558764                       kube-system
	3467323e29d70       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 12 minutes ago      Exited              kube-controller-manager     0                   9e6b2dd745f6b       kube-controller-manager-functional-558764    kube-system
	
	
	==> coredns [23c85d4f9878ac9c5cbe9a1400b3c4cf121f31f6b0a3511a6f0837de4a5447fb] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37276 - 8398 "HINFO IN 1380494043955035761.5579469869762456288. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.07597508s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [6f40f797ed1b5c0f554aac27d3152436e0dfece8ffb0a073dcec8815deee2011] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57314 - 58139 "HINFO IN 6499831345183014793.7426191030435339144. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.143369661s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
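The two coredns logs bracket the mid-run control-plane restart rather than the service failure: the Exited attempt-0 container (6f40f797ed1b5) ends with a clean SIGTERM shutdown, and the attempt-1 container (23c85d4f9878a) reports connection-refused and TLS-handshake-timeout errors only while the recreated kube-apiserver (created 10 minutes ago on a 12-minute-old node, per the container list above) was coming up. A quick health check afterwards would be (hypothetical, not part of the run):

    kubectl --context functional-558764 -n kube-system get pods -l k8s-app=kube-dns

This should show coredns-66bc5c9577-5mb2p Running, consistent with the container status table; nothing here relates to the image-pull failure.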
	
	==> describe nodes <==
	Name:               functional-558764
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-558764
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=functional-558764
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_39_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:39:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-558764
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:51:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:51:16 +0000   Sat, 25 Oct 2025 09:39:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:51:16 +0000   Sat, 25 Oct 2025 09:39:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:51:16 +0000   Sat, 25 Oct 2025 09:39:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:51:16 +0000   Sat, 25 Oct 2025 09:39:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-558764
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                7ce47844-906a-4267-8594-04dd022cc280
	  Boot ID:                    41de8bfa-0cc1-441f-80d4-c56fb8371229
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-jfpzl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-pq82f           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-slf9s                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m38s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m49s
	  kube-system                 coredns-66bc5c9577-5mb2p                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-558764                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-rnsq9                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-558764              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-558764     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-7j8x4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-558764              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-m9rhq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-skv4s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-558764 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-558764 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-558764 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                node-controller  Node functional-558764 event: Registered Node functional-558764 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-558764 status is now: NodeReady
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x9 over 11m)  kubelet          Node functional-558764 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-558764 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-558764 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-558764 event: Registered Node functional-558764 in Controller
	
	
	==> dmesg <==
	[  +0.000020] ll header: 00000000: ff ff ff ff ff ff 16 b3 d7 05 74 b5 08 06
	[ +20.912051] IPv4: martian source 10.244.0.1 from 10.244.0.53, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6e b0 a7 e4 38 e4 08 06
	[Oct25 09:35] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +1.057046] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +1.023954] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +1.023909] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +1.023917] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +1.023908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +2.047808] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +4.031795] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +8.447358] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[ +16.382923] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000014] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[Oct25 09:36] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
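	
	The repeated "martian source" lines above are the kernel flagging packets whose source address is impossible on the receiving interface (127.0.0.1 arriving on eth0 is typical hairpin traffic inside a kic container). Logging of these packets is gated by a sysctl; a minimal sketch for checking or silencing it, assuming a Linux host with sysctl available:
	
	  sysctl net.ipv4.conf.all.log_martians             # 1 = log martians, producing the lines above
	  sudo sysctl -w net.ipv4.conf.all.log_martians=0   # optional: stop them flooding dmesg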
	
	
	==> etcd [1dbda8a873eca575946be84bd45d40248d97211076ec4d348561795de0f52ef8] <==
	{"level":"warn","ts":"2025-10-25T09:40:32.835706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:40:32.843520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:40:32.855522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:40:32.862639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:40:32.869832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:40:32.879996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:40:32.892544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:40:32.900281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:40:32.907738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:40:32.915131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:40:32.923267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:40:32.931781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:40:32.939551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:40:32.946266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:40:32.953341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:40:32.961063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:40:32.968356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:40:32.983660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:40:32.988266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:40:32.994773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:40:33.002336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:40:33.059005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54418","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T09:50:32.506309Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1112}
	{"level":"info","ts":"2025-10-25T09:50:32.525893Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1112,"took":"19.138581ms","hash":1628694106,"current-db-size-bytes":3342336,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1523712,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-10-25T09:50:32.525951Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1628694106,"revision":1112,"compact-revision":-1}
	
	
	==> etcd [c91f0f15f808977c203bf3756dfc928d8764f5a2e4f6f911448d23641714c6f4] <==
	{"level":"warn","ts":"2025-10-25T09:39:14.172076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:39:14.178389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:39:14.198695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:39:14.202334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:39:14.208773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:39:14.215632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:39:14.261605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41936","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T09:40:10.639515Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-25T09:40:10.639631Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-558764","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-25T09:40:10.639745Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T09:40:10.641407Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T09:40:10.641472Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:40:10.641522Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-10-25T09:40:10.641587Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-25T09:40:10.641618Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-25T09:40:10.641633Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-25T09:40:10.641615Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-10-25T09:40:10.641643Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-25T09:40:10.641664Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T09:40:10.641709Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-25T09:40:10.641724Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:40:10.643543Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-25T09:40:10.643617Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:40:10.643655Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-25T09:40:10.643667Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-558764","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 09:51:24 up  1:33,  0 user,  load average: 0.38, 1.34, 20.89
	Linux functional-558764 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [15a00cf77c4414580ec9dba27cdab81991c74598d310878e79fc074264c0025b] <==
	I1025 09:39:23.066768       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:39:23.067022       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1025 09:39:23.067164       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:39:23.067182       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:39:23.067211       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:39:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:39:23.363555       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:39:23.364400       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:39:23.364441       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:39:23.364651       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:39:23.730184       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:39:23.730220       1 metrics.go:72] Registering metrics
	I1025 09:39:23.762264       1 controller.go:711] "Syncing nftables rules"
	I1025 09:39:33.365254       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:39:33.365384       1 main.go:301] handling current node
	I1025 09:39:43.368178       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:39:43.368230       1 main.go:301] handling current node
	I1025 09:39:53.364109       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:39:53.364166       1 main.go:301] handling current node
	
	
	==> kindnet [3036c9a592f3be456734de80f3dae72a6a8854f433d23349d52cf3a1198c4d8e] <==
	I1025 09:49:20.735990       1 main.go:301] handling current node
	I1025 09:49:30.735827       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:49:30.735866       1 main.go:301] handling current node
	I1025 09:49:40.743696       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:49:40.743738       1 main.go:301] handling current node
	I1025 09:49:50.740996       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:49:50.741038       1 main.go:301] handling current node
	I1025 09:50:00.740456       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:50:00.740493       1 main.go:301] handling current node
	I1025 09:50:10.735472       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:50:10.735510       1 main.go:301] handling current node
	I1025 09:50:20.742511       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:50:20.742551       1 main.go:301] handling current node
	I1025 09:50:30.737142       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:50:30.737266       1 main.go:301] handling current node
	I1025 09:50:40.736891       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:50:40.736929       1 main.go:301] handling current node
	I1025 09:50:50.743926       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:50:50.743961       1 main.go:301] handling current node
	I1025 09:51:00.744289       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:51:00.744346       1 main.go:301] handling current node
	I1025 09:51:10.736268       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:51:10.736344       1 main.go:301] handling current node
	I1025 09:51:20.738419       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:51:20.738477       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bc67fc088001540a68fe0f20e5acdba7239aaac28d1796ccbc55ae5e2eea37ef] <==
	I1025 09:40:33.545721       1 policy_source.go:240] refreshing policies
	I1025 09:40:33.561041       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:40:34.435396       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:40:34.670117       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1025 09:40:34.742903       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1025 09:40:34.744286       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:40:34.749193       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:40:35.431359       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:40:35.536603       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:40:35.594815       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:40:35.603344       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:40:37.347845       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:41:10.289047       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.107.108.182"}
	I1025 09:41:14.973577       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.207.153"}
	I1025 09:41:17.038491       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:41:17.205575       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.141.91"}
	I1025 09:41:17.223259       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.194.25"}
	I1025 09:41:18.150349       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.41.124"}
	I1025 09:41:22.855503       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.206.133"}
	E1025 09:41:35.190022       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:58350: use of closed network connection
	E1025 09:41:44.088290       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:58384: use of closed network connection
	I1025 09:41:46.390468       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.103.227.13"}
	E1025 09:42:00.519545       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:54738: use of closed network connection
	E1025 09:42:02.101660       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:54752: use of closed network connection
	I1025 09:50:33.468273       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [3467323e29d700bc81a2d0d8d9ac54c64498c60085582f874e9b879c9b9fedd2] <==
	I1025 09:39:21.624114       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 09:39:21.624133       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 09:39:21.633422       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 09:39:21.640707       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 09:39:21.660574       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 09:39:21.661806       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:39:21.661822       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:39:21.661829       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:39:21.662047       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 09:39:21.662104       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:39:21.662126       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 09:39:21.662176       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 09:39:21.662270       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 09:39:21.662346       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:39:21.662512       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 09:39:21.662674       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 09:39:21.663900       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:39:21.663937       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 09:39:21.663978       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 09:39:21.664432       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:39:21.664813       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 09:39:21.668661       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:39:21.668712       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:39:21.680912       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:39:36.612748       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [b53dafc030d0216bab3ebf6cc3ebbbb6fb326c6c073e43287f4e0e55a96c7cc2] <==
	I1025 09:40:36.894713       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:40:36.894729       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:40:36.897762       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:40:36.897777       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 09:40:36.900021       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 09:40:36.900031       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 09:40:36.900111       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 09:40:36.900183       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 09:40:36.900225       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 09:40:36.900245       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 09:40:36.902694       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:40:36.904301       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 09:40:36.905875       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1025 09:40:36.906694       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 09:40:36.992853       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:40:36.992883       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:40:36.992890       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:40:37.006094       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1025 09:41:17.104707       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:41:17.110392       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:41:17.115859       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:41:17.116047       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:41:17.122504       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:41:17.127452       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:41:17.127605       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [1cae7c6166dc105337eccc6569ae603b165a38d64c72b54d85a2125443cbfe94] <==
	I1025 09:39:22.944577       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:39:23.009166       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:39:23.109531       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:39:23.109573       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 09:39:23.109671       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:39:23.130406       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:39:23.130462       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:39:23.136605       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:39:23.137209       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:39:23.137259       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:39:23.140928       1 config.go:200] "Starting service config controller"
	I1025 09:39:23.140948       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:39:23.141060       1 config.go:309] "Starting node config controller"
	I1025 09:39:23.141118       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:39:23.141130       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:39:23.141217       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:39:23.141303       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:39:23.141291       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:39:23.141360       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:39:23.241492       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:39:23.241572       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:39:23.241572       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [7a760fdbb3d395e7db61d1f5c6b7c88a246e4dfc0b4928086755f55185569b03] <==
	E1025 09:40:00.403421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-558764&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:40:01.366835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-558764&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:40:03.623506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-558764&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:40:09.401467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-558764&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:40:31.747800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-558764&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:43970->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1025 09:40:53.602597       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:40:53.602662       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 09:40:53.602778       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:40:53.623816       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:40:53.623871       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:40:53.630132       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:40:53.630494       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:40:53.630527       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:40:53.631716       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:40:53.631734       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:40:53.631768       1 config.go:200] "Starting service config controller"
	I1025 09:40:53.631776       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:40:53.631798       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:40:53.631813       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:40:53.631838       1 config.go:309] "Starting node config controller"
	I1025 09:40:53.631853       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:40:53.631860       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:40:53.732448       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:40:53.732464       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:40:53.732499       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [212be9ab6fb74794ecaa3640d1f5f9f4d525bd7e4ef2998ec87b97aba6729c22] <==
	E1025 09:39:14.680066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:39:14.680091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:39:14.680172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:39:14.680302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:39:14.680361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:39:15.570644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:39:15.571776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:39:15.655259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:39:15.673488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:39:15.684950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 09:39:15.702408       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:39:15.723760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:39:15.732907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:39:15.788579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:39:15.875723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:39:15.895372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:39:15.929605       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:39:16.005935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1025 09:39:18.076488       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:40:10.530225       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:40:10.530291       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1025 09:40:10.530444       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1025 09:40:10.530463       1 server.go:265] "[graceful-termination] secure server is exiting"
	I1025 09:40:10.530436       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1025 09:40:10.530491       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [299c4ed37ac6089e902cd71e2dbc8533c1c15776b34f3090b8391ceb01997aa7] <==
	I1025 09:40:32.602765       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:40:33.460527       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:40:33.460577       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:40:33.460591       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:40:33.460600       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:40:33.478222       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:40:33.478249       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:40:33.480196       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:40:33.480243       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:40:33.480557       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:40:33.480615       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:40:33.581305       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:48:43 functional-558764 kubelet[4087]: E1025 09:48:43.561490    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-jfpzl" podUID="f2c95397-8b70-41f1-8286-908c7424fa72"
	Oct 25 09:48:55 functional-558764 kubelet[4087]: E1025 09:48:55.560956    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-pq82f" podUID="b619c872-a2bf-4835-99db-4e339351cc4a"
	Oct 25 09:48:58 functional-558764 kubelet[4087]: E1025 09:48:58.560820    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-jfpzl" podUID="f2c95397-8b70-41f1-8286-908c7424fa72"
	Oct 25 09:49:07 functional-558764 kubelet[4087]: E1025 09:49:07.561154    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-pq82f" podUID="b619c872-a2bf-4835-99db-4e339351cc4a"
	Oct 25 09:49:11 functional-558764 kubelet[4087]: E1025 09:49:11.560940    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-jfpzl" podUID="f2c95397-8b70-41f1-8286-908c7424fa72"
	Oct 25 09:49:19 functional-558764 kubelet[4087]: E1025 09:49:19.561517    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-pq82f" podUID="b619c872-a2bf-4835-99db-4e339351cc4a"
	Oct 25 09:49:23 functional-558764 kubelet[4087]: E1025 09:49:23.561003    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-jfpzl" podUID="f2c95397-8b70-41f1-8286-908c7424fa72"
	Oct 25 09:49:32 functional-558764 kubelet[4087]: E1025 09:49:32.561737    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-pq82f" podUID="b619c872-a2bf-4835-99db-4e339351cc4a"
	Oct 25 09:49:37 functional-558764 kubelet[4087]: E1025 09:49:37.561297    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-jfpzl" podUID="f2c95397-8b70-41f1-8286-908c7424fa72"
	Oct 25 09:49:43 functional-558764 kubelet[4087]: E1025 09:49:43.560943    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-pq82f" podUID="b619c872-a2bf-4835-99db-4e339351cc4a"
	Oct 25 09:49:52 functional-558764 kubelet[4087]: E1025 09:49:52.561612    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-jfpzl" podUID="f2c95397-8b70-41f1-8286-908c7424fa72"
	Oct 25 09:49:55 functional-558764 kubelet[4087]: E1025 09:49:55.561352    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-pq82f" podUID="b619c872-a2bf-4835-99db-4e339351cc4a"
	Oct 25 09:50:07 functional-558764 kubelet[4087]: E1025 09:50:07.561297    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-jfpzl" podUID="f2c95397-8b70-41f1-8286-908c7424fa72"
	Oct 25 09:50:08 functional-558764 kubelet[4087]: E1025 09:50:08.560879    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-pq82f" podUID="b619c872-a2bf-4835-99db-4e339351cc4a"
	Oct 25 09:50:19 functional-558764 kubelet[4087]: E1025 09:50:19.560835    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-jfpzl" podUID="f2c95397-8b70-41f1-8286-908c7424fa72"
	Oct 25 09:50:20 functional-558764 kubelet[4087]: E1025 09:50:20.560643    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-pq82f" podUID="b619c872-a2bf-4835-99db-4e339351cc4a"
	Oct 25 09:50:30 functional-558764 kubelet[4087]: E1025 09:50:30.561433    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-jfpzl" podUID="f2c95397-8b70-41f1-8286-908c7424fa72"
	Oct 25 09:50:33 functional-558764 kubelet[4087]: E1025 09:50:33.560836    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-pq82f" podUID="b619c872-a2bf-4835-99db-4e339351cc4a"
	Oct 25 09:50:41 functional-558764 kubelet[4087]: E1025 09:50:41.560475    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-jfpzl" podUID="f2c95397-8b70-41f1-8286-908c7424fa72"
	Oct 25 09:50:48 functional-558764 kubelet[4087]: E1025 09:50:48.561404    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-pq82f" podUID="b619c872-a2bf-4835-99db-4e339351cc4a"
	Oct 25 09:50:54 functional-558764 kubelet[4087]: E1025 09:50:54.560865    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-jfpzl" podUID="f2c95397-8b70-41f1-8286-908c7424fa72"
	Oct 25 09:50:59 functional-558764 kubelet[4087]: E1025 09:50:59.560422    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-pq82f" podUID="b619c872-a2bf-4835-99db-4e339351cc4a"
	Oct 25 09:51:06 functional-558764 kubelet[4087]: E1025 09:51:06.561251    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-jfpzl" podUID="f2c95397-8b70-41f1-8286-908c7424fa72"
	Oct 25 09:51:12 functional-558764 kubelet[4087]: E1025 09:51:12.561860    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-pq82f" podUID="b619c872-a2bf-4835-99db-4e339351cc4a"
	Oct 25 09:51:20 functional-558764 kubelet[4087]: E1025 09:51:20.560548    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-jfpzl" podUID="f2c95397-8b70-41f1-8286-908c7424fa72"
	
	
	==> kubernetes-dashboard [14218f74b64f006d91d6c523cd0072e9c3baec26f8e94d40246c0b649580921d] <==
	2025/10/25 09:41:20 Starting overwatch
	2025/10/25 09:41:20 Using namespace: kubernetes-dashboard
	2025/10/25 09:41:20 Using in-cluster config to connect to apiserver
	2025/10/25 09:41:20 Using secret token for csrf signing
	2025/10/25 09:41:20 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 09:41:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 09:41:21 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 09:41:21 Generating JWE encryption key
	2025/10/25 09:41:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 09:41:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 09:41:21 Initializing JWE encryption key from synchronized object
	2025/10/25 09:41:21 Creating in-cluster Sidecar client
	2025/10/25 09:41:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:41:21 Serving insecurely on HTTP port: 9090
	2025/10/25 09:41:51 Successful request to sidecar
	
	
	==> storage-provisioner [46b48a390c5b664121ea6f418a65e6aef521f4eab455ed9e243399080ca63ec6] <==
	I1025 09:40:00.295769       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:40:00.297292       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [a7dee3581b262232394d26ce88dd37305da4008fd3df2605150cb2ec57362ee5] <==
	W1025 09:51:00.964334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:51:02.967668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:51:02.971817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:51:04.975566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:51:04.981189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:51:06.984261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:51:06.988867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:51:08.992039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:51:08.996003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:51:10.999361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:51:11.003482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:51:13.006592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:51:13.011602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:51:15.015070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:51:15.020091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:51:17.023443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:51:17.027826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:51:19.031745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:51:19.037914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:51:21.041515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:51:21.046060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:51:23.049481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:51:23.054160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:51:25.058032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:51:25.063011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
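
The v1 Endpoints deprecation warnings from storage-provisioner above come from its leader-election client still watching Endpoints objects, which Kubernetes deprecates as of v1.33; they are noise here, not the failure cause. A minimal check, reusing the context name from this run, to confirm the replacement discovery.k8s.io/v1 EndpointSlice objects are being served alongside the deprecated resource:

	kubectl --context functional-558764 get endpoints -A
	kubectl --context functional-558764 get endpointslices.discovery.k8s.io -A
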
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-558764 -n functional-558764
helpers_test.go:269: (dbg) Run:  kubectl --context functional-558764 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-jfpzl hello-node-connect-7d85dfc575-pq82f
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-558764 describe pod busybox-mount hello-node-75c85bcc94-jfpzl hello-node-connect-7d85dfc575-pq82f
helpers_test.go:290: (dbg) kubectl --context functional-558764 describe pod busybox-mount hello-node-75c85bcc94-jfpzl hello-node-connect-7d85dfc575-pq82f:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-558764/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 09:41:32 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  cri-o://05959414a0af2e11b6276e711d38a25c453cc421b0e78afa68f6159186b81c7a
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 25 Oct 2025 09:41:34 +0000
	      Finished:     Sat, 25 Oct 2025 09:41:34 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hglxr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-hglxr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m53s  default-scheduler  Successfully assigned default/busybox-mount to functional-558764
	  Normal  Pulling    9m53s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m51s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.022s (2.022s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m51s  kubelet            Created container: mount-munger
	  Normal  Started    9m51s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-jfpzl
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-558764/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 09:41:14 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d7xwt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-d7xwt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-jfpzl to functional-558764
	  Normal   Pulling    7m14s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m14s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m14s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    5s (x43 over 10m)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     5s (x43 over 10m)    kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-pq82f
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-558764/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 09:41:22 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-szt5x (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-szt5x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-pq82f to functional-558764
	  Normal   Pulling    7m10s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m10s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m10s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m54s (x22 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m54s (x22 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.10s)
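
The kubelet events above all point at one root cause: CRI-O resolves short image names in enforcing mode, and the unqualified reference kicbase/echo-server matches more than one registry in its search list, so every pull aborts and both hello-node pods stay in ImagePullBackOff. A hedged sketch of two remedies; the registries.conf path is the stock containers-image location and is an assumption about the kic node image:

	# Remedy 1: point the existing deployment at a fully-qualified name
	kubectl --context functional-558764 set image deployment/hello-node \
	  echo-server=docker.io/kicbase/echo-server:latest
	# Remedy 2, run inside `minikube -p functional-558764 ssh` (assumed config path):
	echo 'unqualified-search-registries = ["docker.io"]' | sudo tee -a /etc/containers/registries.conf
	sudo systemctl restart crio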

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-558764 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-558764 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-jfpzl" [f2c95397-8b70-41f1-8286-908c7424fa72] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-558764 -n functional-558764
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-25 09:51:15.33667833 +0000 UTC m=+1149.424374213
functional_test.go:1460: (dbg) Run:  kubectl --context functional-558764 describe po hello-node-75c85bcc94-jfpzl -n default
functional_test.go:1460: (dbg) kubectl --context functional-558764 describe po hello-node-75c85bcc94-jfpzl -n default:
Name:             hello-node-75c85bcc94-jfpzl
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-558764/192.168.49.2
Start Time:       Sat, 25 Oct 2025 09:41:14 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d7xwt (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-d7xwt:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-jfpzl to functional-558764
Normal   Pulling    7m4s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m4s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m4s (x5 over 10m)    kubelet            Error: ErrImagePull
Normal   BackOff    4m54s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m54s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-558764 logs hello-node-75c85bcc94-jfpzl -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-558764 logs hello-node-75c85bcc94-jfpzl -n default: exit status 1 (70.846414ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-jfpzl" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-558764 logs hello-node-75c85bcc94-jfpzl -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.69s)
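
Same short-name failure as ServiceCmdConnect: the pod never starts because the pull of the unqualified image is rejected. A quick sketch that sidesteps the ambiguity by deploying a fully-qualified reference; the hello-node-fq name is hypothetical:

	kubectl --context functional-558764 create deployment hello-node-fq \
	  --image=docker.io/kicbase/echo-server:latest
	kubectl --context functional-558764 rollout status deployment/hello-node-fq --timeout=120s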

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (2.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-558764 image ls --format short --alsologtostderr: (2.274014282s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-558764 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-558764 image ls --format short --alsologtostderr:
I1025 09:41:50.261001  365648 out.go:360] Setting OutFile to fd 1 ...
I1025 09:41:50.261314  365648 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:41:50.261341  365648 out.go:374] Setting ErrFile to fd 2...
I1025 09:41:50.261348  365648 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:41:50.261595  365648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
I1025 09:41:50.262420  365648 config.go:182] Loaded profile config "functional-558764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:41:50.262574  365648 config.go:182] Loaded profile config "functional-558764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:41:50.263055  365648 cli_runner.go:164] Run: docker container inspect functional-558764 --format={{.State.Status}}
I1025 09:41:50.286817  365648 ssh_runner.go:195] Run: systemctl --version
I1025 09:41:50.286868  365648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558764
I1025 09:41:50.310704  365648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/functional-558764/id_rsa Username:docker}
I1025 09:41:50.418696  365648 ssh_runner.go:195] Run: sudo crictl images --output json
I1025 09:41:52.446556  365648 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.027821619s)
W1025 09:41:52.446635  365648 cache_images.go:735] Failed to list images for profile functional-558764 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E1025 09:41:52.443175    7224 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" filter="image:{}"
time="2025-10-25T09:41:52Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL"
functional_test.go:290: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.27s)
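
Here the listing itself broke: the crictl images RPC was cancelled with DeadlineExceeded, so minikube printed an empty image list. A triage sketch against the node, assuming the standard crio systemd unit on the kic image:

	minikube -p functional-558764 ssh -- sudo systemctl status crio --no-pager
	minikube -p functional-558764 ssh -- sudo crictl --timeout 60s images
	minikube -p functional-558764 ssh -- sudo journalctl -u crio --since "-5min" --no-pager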

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 image load --daemon kicbase/echo-server:functional-558764 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-558764" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.95s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 image load --daemon kicbase/echo-server:functional-558764 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-558764" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.99s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-558764
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 image load --daemon kicbase/echo-server:functional-558764 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-558764" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.65s)
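
ImageLoadDaemon, ImageReloadDaemon, and this test fail the same way: `image load --daemon` returns, but the follow-up `image ls` (itself failing above) never shows the tag. A manual sketch to separate the transfer step from the listing step:

	docker tag kicbase/echo-server:latest kicbase/echo-server:functional-558764
	minikube -p functional-558764 image load --daemon kicbase/echo-server:functional-558764
	minikube -p functional-558764 ssh -- sudo crictl --timeout 60s images | grep echo-server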

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 image save kicbase/echo-server:functional-558764 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)
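
A minimal reproduction of the save path, writing to /tmp rather than the Jenkins workspace (the path choice is illustrative only), then verifying the archive was actually produced:

	minikube -p functional-558764 image save kicbase/echo-server:functional-558764 /tmp/echo-server-save.tar
	ls -l /tmp/echo-server-save.tar && tar -tf /tmp/echo-server-save.tar | head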

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1025 09:41:48.084555  365038 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:41:48.084841  365038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:41:48.084851  365038 out.go:374] Setting ErrFile to fd 2...
	I1025 09:41:48.084855  365038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:41:48.085073  365038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 09:41:48.085733  365038 config.go:182] Loaded profile config "functional-558764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:41:48.085830  365038 config.go:182] Loaded profile config "functional-558764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:41:48.086276  365038 cli_runner.go:164] Run: docker container inspect functional-558764 --format={{.State.Status}}
	I1025 09:41:48.110353  365038 ssh_runner.go:195] Run: systemctl --version
	I1025 09:41:48.110420  365038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558764
	I1025 09:41:48.136673  365038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/functional-558764/id_rsa Username:docker}
	I1025 09:41:48.251640  365038 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1025 09:41:48.251722  365038 cache_images.go:254] Failed to load cached images for "functional-558764": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1025 09:41:48.251755  365038 cache_images.go:266] failed pushing to: functional-558764

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.25s)
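
The stderr above shows this failure is purely downstream: the tarball from ImageSaveToFile was never written, so the load stats a missing file. A guard sketch before loading:

	tar=/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	if [ -s "$tar" ]; then minikube -p functional-558764 image load "$tar"; else echo "missing $tar: save step failed"; fi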

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-558764
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 image save --daemon kicbase/echo-server:functional-558764 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-558764
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-558764: exit status 1 (26.898921ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-558764

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-558764

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.51s)
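
Per the test's inspect target, `image save --daemon` is expected to re-export the image into the local Docker daemon under a localhost/ prefix; the inspect fails because nothing was exported at all. A sketch to re-run the export and look for the tag under either name:

	minikube -p functional-558764 image save --daemon kicbase/echo-server:functional-558764
	docker image ls | grep echo-server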

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558764 service --namespace=default --https --url hello-node: exit status 115 (562.761535ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31993
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-558764 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)
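
minikube prints the NodePort URL but exits 115 because the service has no ready backend, consistent with the ImagePullBackOff pods above. Standard checks on what sits behind the service:

	kubectl --context functional-558764 get endpoints hello-node
	kubectl --context functional-558764 get pods -l app=hello-node -o wide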

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558764 service hello-node --url --format={{.IP}}: exit status 115 (560.770913ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-558764 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558764 service hello-node --url: exit status 115 (561.756879ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31993
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-558764 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31993
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.56s)
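
Once a backing pod is Ready, the URL printed above should answer directly; a quick probe against the endpoint from this run:

	curl -sS --max-time 5 http://192.168.49.2:31993/ || echo 'no ready backend yet'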

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.5s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-513456 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-513456 --output=json --user=testUser: exit status 80 (2.502950017s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"62c682bf-021d-4886-b4c7-aacf1d4dc96e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-513456 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"05057af6-5848-4a09-ab97-d9d731403282","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-25T10:00:12Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"284c04bd-1cef-41ba-a720-a283a18ec20c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-513456 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.50s)
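
The pause fails before touching any container: runc cannot even list state because /run/runc is missing on the node. A sketch that mirrors the failing call to confirm, using this profile name:

	minikube -p json-output-513456 ssh -- sudo ls -ld /run/runc
	minikube -p json-output-513456 ssh -- sudo runc list -f json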

                                                
                                    
x
+
TestJSONOutput/unpause/Command (2.19s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-513456 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-513456 --output=json --user=testUser: exit status 80 (2.18680051s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4eb10dba-56ee-42b1-a2a5-92b8b9c53f77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-513456 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"605d16e1-fb50-485b-9808-8986ead4b411","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-25T10:00:14Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"98581fdc-b856-4fc2-ba49-c4306e7d08d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-513456 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.19s)
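
Unpause trips over the same missing /run/runc state directory. Since this job runs crio, it is worth confirming which OCI runtime (and state root) crio is actually configured with; the /etc/crio path is the stock location and an assumption for the kic image:

	minikube -p json-output-513456 ssh -- sudo grep -rn -e default_runtime -e runtime_root /etc/crio/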

                                                
                                    
x
+
TestPause/serial/Pause (7.5s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-200480 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-200480 --alsologtostderr -v=5: exit status 80 (2.39688349s)

                                                
                                                
-- stdout --
	* Pausing node pause-200480 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 10:13:50.148910  509507 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:13:50.149058  509507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:13:50.149069  509507 out.go:374] Setting ErrFile to fd 2...
	I1025 10:13:50.149076  509507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:13:50.149314  509507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 10:13:50.149604  509507 out.go:368] Setting JSON to false
	I1025 10:13:50.149667  509507 mustload.go:65] Loading cluster: pause-200480
	I1025 10:13:50.150134  509507 config.go:182] Loaded profile config "pause-200480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:50.150741  509507 cli_runner.go:164] Run: docker container inspect pause-200480 --format={{.State.Status}}
	I1025 10:13:50.170453  509507 host.go:66] Checking if "pause-200480" exists ...
	I1025 10:13:50.170785  509507 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:13:50.246340  509507 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:85 SystemTime:2025-10-25 10:13:50.233584614 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:13:50.247309  509507 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-200480 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 10:13:50.250967  509507 out.go:179] * Pausing node pause-200480 ... 
	I1025 10:13:50.252291  509507 host.go:66] Checking if "pause-200480" exists ...
	I1025 10:13:50.252664  509507 ssh_runner.go:195] Run: systemctl --version
	I1025 10:13:50.252723  509507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-200480
	I1025 10:13:50.279778  509507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/pause-200480/id_rsa Username:docker}
	I1025 10:13:50.389942  509507 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:13:50.404818  509507 pause.go:52] kubelet running: true
	I1025 10:13:50.404886  509507 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:13:50.548829  509507 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:13:50.548955  509507 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:13:50.622783  509507 cri.go:89] found id: "ec3f0975fec3d919f90773255c0149ca9e9d19ae6f9ec9a6fb3defbc4471e7cf"
	I1025 10:13:50.622808  509507 cri.go:89] found id: "fe27241176bf76993884109c1cdc551c32fe9af9f43fbb3aeae01048d5b1e4bf"
	I1025 10:13:50.622814  509507 cri.go:89] found id: "e080b5d65c56bd1b04301a4db5b669a2ce749613037e2227c561a39e07d71b3a"
	I1025 10:13:50.622819  509507 cri.go:89] found id: "a2bf1b0b7321a314961ca686d9983e6fcf281b2c4096cb1d82c060bdd8b0dc28"
	I1025 10:13:50.622823  509507 cri.go:89] found id: "27db2729cf64a2e9b1d06ef82efdd2cec3eeb410d21ea6d1ed35c44ba965cd5a"
	I1025 10:13:50.622828  509507 cri.go:89] found id: "74b63b63d97cbaf45ce6897ced783b4c3e4f98c71e66414df394bff0ac34580e"
	I1025 10:13:50.622833  509507 cri.go:89] found id: "a6c95c62c336d6d74920be9e94fef714f88e4bb1327664ce7a2283c01f3f72ce"
	I1025 10:13:50.622837  509507 cri.go:89] found id: ""
	I1025 10:13:50.622886  509507 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:13:50.638658  509507 retry.go:31] will retry after 271.793079ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:13:50Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:13:50.911209  509507 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:13:50.932081  509507 pause.go:52] kubelet running: false
	I1025 10:13:50.932152  509507 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:13:51.094773  509507 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:13:51.094855  509507 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:13:51.182129  509507 cri.go:89] found id: "ec3f0975fec3d919f90773255c0149ca9e9d19ae6f9ec9a6fb3defbc4471e7cf"
	I1025 10:13:51.182158  509507 cri.go:89] found id: "fe27241176bf76993884109c1cdc551c32fe9af9f43fbb3aeae01048d5b1e4bf"
	I1025 10:13:51.182163  509507 cri.go:89] found id: "e080b5d65c56bd1b04301a4db5b669a2ce749613037e2227c561a39e07d71b3a"
	I1025 10:13:51.182168  509507 cri.go:89] found id: "a2bf1b0b7321a314961ca686d9983e6fcf281b2c4096cb1d82c060bdd8b0dc28"
	I1025 10:13:51.182173  509507 cri.go:89] found id: "27db2729cf64a2e9b1d06ef82efdd2cec3eeb410d21ea6d1ed35c44ba965cd5a"
	I1025 10:13:51.182177  509507 cri.go:89] found id: "74b63b63d97cbaf45ce6897ced783b4c3e4f98c71e66414df394bff0ac34580e"
	I1025 10:13:51.182182  509507 cri.go:89] found id: "a6c95c62c336d6d74920be9e94fef714f88e4bb1327664ce7a2283c01f3f72ce"
	I1025 10:13:51.182187  509507 cri.go:89] found id: ""
	I1025 10:13:51.182233  509507 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:13:51.195912  509507 retry.go:31] will retry after 235.668656ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:13:51Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:13:51.432466  509507 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:13:51.451990  509507 pause.go:52] kubelet running: false
	I1025 10:13:51.452051  509507 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:13:51.589408  509507 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:13:51.589506  509507 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:13:51.674048  509507 cri.go:89] found id: "ec3f0975fec3d919f90773255c0149ca9e9d19ae6f9ec9a6fb3defbc4471e7cf"
	I1025 10:13:51.674078  509507 cri.go:89] found id: "fe27241176bf76993884109c1cdc551c32fe9af9f43fbb3aeae01048d5b1e4bf"
	I1025 10:13:51.674085  509507 cri.go:89] found id: "e080b5d65c56bd1b04301a4db5b669a2ce749613037e2227c561a39e07d71b3a"
	I1025 10:13:51.674090  509507 cri.go:89] found id: "a2bf1b0b7321a314961ca686d9983e6fcf281b2c4096cb1d82c060bdd8b0dc28"
	I1025 10:13:51.674094  509507 cri.go:89] found id: "27db2729cf64a2e9b1d06ef82efdd2cec3eeb410d21ea6d1ed35c44ba965cd5a"
	I1025 10:13:51.674098  509507 cri.go:89] found id: "74b63b63d97cbaf45ce6897ced783b4c3e4f98c71e66414df394bff0ac34580e"
	I1025 10:13:51.674102  509507 cri.go:89] found id: "a6c95c62c336d6d74920be9e94fef714f88e4bb1327664ce7a2283c01f3f72ce"
	I1025 10:13:51.674106  509507 cri.go:89] found id: ""
	I1025 10:13:51.674163  509507 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:13:51.690641  509507 retry.go:31] will retry after 362.780334ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:13:51Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:13:52.054348  509507 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:13:52.069301  509507 pause.go:52] kubelet running: false
	I1025 10:13:52.069376  509507 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:13:52.187910  509507 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:13:52.187981  509507 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:13:52.259642  509507 cri.go:89] found id: "ec3f0975fec3d919f90773255c0149ca9e9d19ae6f9ec9a6fb3defbc4471e7cf"
	I1025 10:13:52.259669  509507 cri.go:89] found id: "fe27241176bf76993884109c1cdc551c32fe9af9f43fbb3aeae01048d5b1e4bf"
	I1025 10:13:52.259675  509507 cri.go:89] found id: "e080b5d65c56bd1b04301a4db5b669a2ce749613037e2227c561a39e07d71b3a"
	I1025 10:13:52.259680  509507 cri.go:89] found id: "a2bf1b0b7321a314961ca686d9983e6fcf281b2c4096cb1d82c060bdd8b0dc28"
	I1025 10:13:52.259684  509507 cri.go:89] found id: "27db2729cf64a2e9b1d06ef82efdd2cec3eeb410d21ea6d1ed35c44ba965cd5a"
	I1025 10:13:52.259690  509507 cri.go:89] found id: "74b63b63d97cbaf45ce6897ced783b4c3e4f98c71e66414df394bff0ac34580e"
	I1025 10:13:52.259693  509507 cri.go:89] found id: "a6c95c62c336d6d74920be9e94fef714f88e4bb1327664ce7a2283c01f3f72ce"
	I1025 10:13:52.259697  509507 cri.go:89] found id: ""
	I1025 10:13:52.259746  509507 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:13:52.337143  509507 out.go:203] 
	W1025 10:13:52.343783  509507 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:13:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 10:13:52.343818  509507 out.go:285] * 
	W1025 10:13:52.347844  509507 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 10:13:52.473070  509507 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-200480 --alsologtostderr -v=5" : exit status 80
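Editor's note: the `will retry after …ms` lines above come from minikube's retry helper (retry.go:31), which re-runs `sudo runc list -f json` with a randomized backoff until it gives up and surfaces GUEST_PAUSE; every attempt fails the same way because `/run/runc` does not exist on the node. A minimal Go sketch of that pattern, assuming only the standard library (the attempt cap and backoff window here are illustrative, not minikube's actual values):

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// listRunningContainers shells out the same way the pause path does;
	// it returns exit status 1 while the runc root (/run/runc) is missing.
	func listRunningContainers() ([]byte, error) {
		return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	}

	func main() {
		const attempts = 4 // illustrative; the log shows a handful of retries before GUEST_PAUSE
		for i := 0; i < attempts; i++ {
			out, err := listRunningContainers()
			if err == nil {
				fmt.Printf("runc list succeeded: %s\n", out)
				return
			}
			// Randomized backoff, mirroring the "will retry after 271.793079ms" lines above.
			d := time.Duration(200+rand.Intn(200)) * time.Millisecond
			fmt.Printf("will retry after %v: %v\n%s\n", d, err, out)
			time.Sleep(d)
		}
		fmt.Println("Exiting due to GUEST_PAUSE: list running containers failed")
	}

Since the error is deterministic (a missing directory, not a transient race), the backoff never helps here and the loop exhausts its retries every time.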
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-200480
helpers_test.go:243: (dbg) docker inspect pause-200480:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b0bf98b8c65868044fc0a5b47fe72a4ffcd4b0d3d85058a553b41455a6ab4836",
	        "Created": "2025-10-25T10:13:04.732729023Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 496205,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:13:05.57937843Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/b0bf98b8c65868044fc0a5b47fe72a4ffcd4b0d3d85058a553b41455a6ab4836/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b0bf98b8c65868044fc0a5b47fe72a4ffcd4b0d3d85058a553b41455a6ab4836/hostname",
	        "HostsPath": "/var/lib/docker/containers/b0bf98b8c65868044fc0a5b47fe72a4ffcd4b0d3d85058a553b41455a6ab4836/hosts",
	        "LogPath": "/var/lib/docker/containers/b0bf98b8c65868044fc0a5b47fe72a4ffcd4b0d3d85058a553b41455a6ab4836/b0bf98b8c65868044fc0a5b47fe72a4ffcd4b0d3d85058a553b41455a6ab4836-json.log",
	        "Name": "/pause-200480",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-200480:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-200480",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b0bf98b8c65868044fc0a5b47fe72a4ffcd4b0d3d85058a553b41455a6ab4836",
	                "LowerDir": "/var/lib/docker/overlay2/2ab96fa301af71428557f091b88d4ae0e237d47661ebc6475228939c50809afb-init/diff:/var/lib/docker/overlay2/9d1960ec6ab151df7efbd4d43f9ccd1aaf4e0bd9e7db4285118644a6d5a54279/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2ab96fa301af71428557f091b88d4ae0e237d47661ebc6475228939c50809afb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2ab96fa301af71428557f091b88d4ae0e237d47661ebc6475228939c50809afb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2ab96fa301af71428557f091b88d4ae0e237d47661ebc6475228939c50809afb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-200480",
	                "Source": "/var/lib/docker/volumes/pause-200480/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-200480",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-200480",
	                "name.minikube.sigs.k8s.io": "pause-200480",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8a9c1b2227981efe0186de18e2155254d1788fc1de4bbd697b690cd8931c8f67",
	            "SandboxKey": "/var/run/docker/netns/8a9c1b222798",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32978"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32979"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32982"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32980"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32981"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-200480": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:c8:36:c4:8d:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d5b5f4eaeccab16064d966f84da86c61e47ae88cc4af80beaf17957998026d5d",
	                    "EndpointID": "6ac6d9b5aac6693236c2ec0efdc5d675693c820736a899643e66997f2663d32f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-200480",
	                        "b0bf98b8c658"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-200480 -n pause-200480
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-200480 -n pause-200480: exit status 2 (344.688157ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-200480 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-200480 logs -n 25: (2.161391428s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p scheduled-stop-514449 --memory=3072 --driver=docker  --container-runtime=crio                                                         │ scheduled-stop-514449       │ jenkins │ v1.37.0 │ 25 Oct 25 10:10 UTC │ 25 Oct 25 10:11 UTC │
	│ stop    │ -p scheduled-stop-514449 --schedule 5m                                                                                                   │ scheduled-stop-514449       │ jenkins │ v1.37.0 │ 25 Oct 25 10:11 UTC │                     │
	│ stop    │ -p scheduled-stop-514449 --schedule 5m                                                                                                   │ scheduled-stop-514449       │ jenkins │ v1.37.0 │ 25 Oct 25 10:11 UTC │                     │
	│ stop    │ -p scheduled-stop-514449 --schedule 5m                                                                                                   │ scheduled-stop-514449       │ jenkins │ v1.37.0 │ 25 Oct 25 10:11 UTC │                     │
	│ stop    │ -p scheduled-stop-514449 --schedule 15s                                                                                                  │ scheduled-stop-514449       │ jenkins │ v1.37.0 │ 25 Oct 25 10:11 UTC │                     │
	│ stop    │ -p scheduled-stop-514449 --schedule 15s                                                                                                  │ scheduled-stop-514449       │ jenkins │ v1.37.0 │ 25 Oct 25 10:11 UTC │                     │
	│ stop    │ -p scheduled-stop-514449 --schedule 15s                                                                                                  │ scheduled-stop-514449       │ jenkins │ v1.37.0 │ 25 Oct 25 10:11 UTC │                     │
	│ stop    │ -p scheduled-stop-514449 --cancel-scheduled                                                                                              │ scheduled-stop-514449       │ jenkins │ v1.37.0 │ 25 Oct 25 10:11 UTC │ 25 Oct 25 10:11 UTC │
	│ stop    │ -p scheduled-stop-514449 --schedule 15s                                                                                                  │ scheduled-stop-514449       │ jenkins │ v1.37.0 │ 25 Oct 25 10:11 UTC │                     │
	│ stop    │ -p scheduled-stop-514449 --schedule 15s                                                                                                  │ scheduled-stop-514449       │ jenkins │ v1.37.0 │ 25 Oct 25 10:11 UTC │                     │
	│ stop    │ -p scheduled-stop-514449 --schedule 15s                                                                                                  │ scheduled-stop-514449       │ jenkins │ v1.37.0 │ 25 Oct 25 10:11 UTC │ 25 Oct 25 10:12 UTC │
	│ delete  │ -p scheduled-stop-514449                                                                                                                 │ scheduled-stop-514449       │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ start   │ -p insufficient-storage-591590 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-591590 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ delete  │ -p insufficient-storage-591590                                                                                                           │ insufficient-storage-591590 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ start   │ -p offline-crio-169271 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-169271         │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:13 UTC │
	│ start   │ -p pause-200480 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-200480                │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:13 UTC │
	│ start   │ -p stopped-upgrade-291164 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-291164      │ jenkins │ v1.32.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:13 UTC │
	│ start   │ -p missing-upgrade-363411 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-363411      │ jenkins │ v1.32.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:13 UTC │
	│ stop    │ stopped-upgrade-291164 stop                                                                                                              │ stopped-upgrade-291164      │ jenkins │ v1.32.0 │ 25 Oct 25 10:13 UTC │ 25 Oct 25 10:13 UTC │
	│ start   │ -p stopped-upgrade-291164 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-291164      │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │                     │
	│ start   │ -p missing-upgrade-363411 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-363411      │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │                     │
	│ start   │ -p pause-200480 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-200480                │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │ 25 Oct 25 10:13 UTC │
	│ delete  │ -p offline-crio-169271                                                                                                                   │ offline-crio-169271         │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │ 25 Oct 25 10:13 UTC │
	│ start   │ -p kubernetes-upgrade-311859 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-311859   │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │                     │
	│ pause   │ -p pause-200480 --alsologtostderr -v=5                                                                                                   │ pause-200480                │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:13:48
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:13:48.270543  508860 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:13:48.270853  508860 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:13:48.270865  508860 out.go:374] Setting ErrFile to fd 2...
	I1025 10:13:48.270869  508860 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:13:48.271120  508860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 10:13:48.271679  508860 out.go:368] Setting JSON to false
	I1025 10:13:48.272781  508860 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6977,"bootTime":1761380251,"procs":263,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 10:13:48.272847  508860 start.go:141] virtualization: kvm guest
	I1025 10:13:48.275108  508860 out.go:179] * [kubernetes-upgrade-311859] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 10:13:48.276924  508860 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:13:48.276922  508860 notify.go:220] Checking for updates...
	I1025 10:13:48.279638  508860 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:13:48.281163  508860 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:13:48.285509  508860 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	I1025 10:13:48.286920  508860 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 10:13:48.288189  508860 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:13:48.290069  508860 config.go:182] Loaded profile config "missing-upgrade-363411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 10:13:48.290201  508860 config.go:182] Loaded profile config "pause-200480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:48.290294  508860 config.go:182] Loaded profile config "stopped-upgrade-291164": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 10:13:48.290431  508860 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:13:48.319205  508860 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 10:13:48.319341  508860 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:13:48.394163  508860 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:false NGoroutines:69 SystemTime:2025-10-25 10:13:48.381687367 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:13:48.394264  508860 docker.go:318] overlay module found
	I1025 10:13:48.396345  508860 out.go:179] * Using the docker driver based on user configuration
	I1025 10:13:48.399111  508860 start.go:305] selected driver: docker
	I1025 10:13:48.399134  508860 start.go:925] validating driver "docker" against <nil>
	I1025 10:13:48.399149  508860 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:13:48.399944  508860 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:13:48.466520  508860 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:false NGoroutines:69 SystemTime:2025-10-25 10:13:48.454470985 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:13:48.466722  508860 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 10:13:48.466934  508860 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 10:13:48.468980  508860 out.go:179] * Using Docker driver with root privileges
	I1025 10:13:48.470410  508860 cni.go:84] Creating CNI manager for ""
	I1025 10:13:48.470472  508860 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:13:48.470487  508860 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 10:13:48.470588  508860 start.go:349] cluster config:
	{Name:kubernetes-upgrade-311859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-311859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:13:48.472180  508860 out.go:179] * Starting "kubernetes-upgrade-311859" primary control-plane node in "kubernetes-upgrade-311859" cluster
	I1025 10:13:48.473379  508860 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:13:48.474894  508860 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:13:48.476362  508860 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 10:13:48.476397  508860 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:13:48.476409  508860 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1025 10:13:48.476420  508860 cache.go:58] Caching tarball of preloaded images
	I1025 10:13:48.476517  508860 preload.go:233] Found /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 10:13:48.476528  508860 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1025 10:13:48.476686  508860 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/kubernetes-upgrade-311859/config.json ...
	I1025 10:13:48.476710  508860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/kubernetes-upgrade-311859/config.json: {Name:mk44487fc2e94fa7be043d1f553824d3f3063775 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:48.499967  508860 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:13:48.499991  508860 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:13:48.500010  508860 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:13:48.500052  508860 start.go:360] acquireMachinesLock for kubernetes-upgrade-311859: {Name:mk86159ddb1c244ee6d57a343afa2f3989c81171 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:13:48.500179  508860 start.go:364] duration metric: took 99.296µs to acquireMachinesLock for "kubernetes-upgrade-311859"
	I1025 10:13:48.500211  508860 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-311859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-311859 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:13:48.500293  508860 start.go:125] createHost starting for "" (driver="docker")
	I1025 10:13:48.187465  506990 addons.go:514] duration metric: took 4.033091ms for enable addons: enabled=[]
	I1025 10:13:48.187510  506990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:13:48.307777  506990 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:13:48.325131  506990 node_ready.go:35] waiting up to 6m0s for node "pause-200480" to be "Ready" ...
	I1025 10:13:48.334351  506990 node_ready.go:49] node "pause-200480" is "Ready"
	I1025 10:13:48.334385  506990 node_ready.go:38] duration metric: took 9.209665ms for node "pause-200480" to be "Ready" ...
	I1025 10:13:48.334404  506990 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:13:48.334467  506990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:13:48.352684  506990 api_server.go:72] duration metric: took 169.301508ms to wait for apiserver process to appear ...
	I1025 10:13:48.352727  506990 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:13:48.352750  506990 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:13:48.358252  506990 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1025 10:13:48.359310  506990 api_server.go:141] control plane version: v1.34.1
	I1025 10:13:48.359388  506990 api_server.go:131] duration metric: took 6.652432ms to wait for apiserver health ...
	I1025 10:13:48.359411  506990 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:13:48.362905  506990 system_pods.go:59] 7 kube-system pods found
	I1025 10:13:48.362939  506990 system_pods.go:61] "coredns-66bc5c9577-dpc7k" [1170a8f9-34c4-4475-8133-52cc6e952076] Running
	I1025 10:13:48.362948  506990 system_pods.go:61] "etcd-pause-200480" [4c1da88c-b301-48c5-b38a-a44eed9e833d] Running
	I1025 10:13:48.362953  506990 system_pods.go:61] "kindnet-s7b7r" [d5d2bdb5-f9ae-4593-a6cb-d2d0063f7237] Running
	I1025 10:13:48.362959  506990 system_pods.go:61] "kube-apiserver-pause-200480" [48b86f89-2f20-4480-9977-ecb83e7561ed] Running
	I1025 10:13:48.362964  506990 system_pods.go:61] "kube-controller-manager-pause-200480" [c1208adf-940e-434d-83bd-bc48516eea67] Running
	I1025 10:13:48.362969  506990 system_pods.go:61] "kube-proxy-9t747" [799bde2b-b5a9-41a7-a0d2-3651a174cf6f] Running
	I1025 10:13:48.362974  506990 system_pods.go:61] "kube-scheduler-pause-200480" [d1bbbb04-a730-4faa-87dc-a9c008d45697] Running
	I1025 10:13:48.362981  506990 system_pods.go:74] duration metric: took 3.55376ms to wait for pod list to return data ...
	I1025 10:13:48.362990  506990 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:13:48.365499  506990 default_sa.go:45] found service account: "default"
	I1025 10:13:48.365521  506990 default_sa.go:55] duration metric: took 2.524118ms for default service account to be created ...
	I1025 10:13:48.365531  506990 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:13:48.369387  506990 system_pods.go:86] 7 kube-system pods found
	I1025 10:13:48.369415  506990 system_pods.go:89] "coredns-66bc5c9577-dpc7k" [1170a8f9-34c4-4475-8133-52cc6e952076] Running
	I1025 10:13:48.369423  506990 system_pods.go:89] "etcd-pause-200480" [4c1da88c-b301-48c5-b38a-a44eed9e833d] Running
	I1025 10:13:48.369430  506990 system_pods.go:89] "kindnet-s7b7r" [d5d2bdb5-f9ae-4593-a6cb-d2d0063f7237] Running
	I1025 10:13:48.369435  506990 system_pods.go:89] "kube-apiserver-pause-200480" [48b86f89-2f20-4480-9977-ecb83e7561ed] Running
	I1025 10:13:48.369455  506990 system_pods.go:89] "kube-controller-manager-pause-200480" [c1208adf-940e-434d-83bd-bc48516eea67] Running
	I1025 10:13:48.369465  506990 system_pods.go:89] "kube-proxy-9t747" [799bde2b-b5a9-41a7-a0d2-3651a174cf6f] Running
	I1025 10:13:48.369470  506990 system_pods.go:89] "kube-scheduler-pause-200480" [d1bbbb04-a730-4faa-87dc-a9c008d45697] Running
	I1025 10:13:48.369479  506990 system_pods.go:126] duration metric: took 3.940121ms to wait for k8s-apps to be running ...
	I1025 10:13:48.369491  506990 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:13:48.369654  506990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:13:48.387679  506990 system_svc.go:56] duration metric: took 18.178399ms WaitForService to wait for kubelet
	I1025 10:13:48.387713  506990 kubeadm.go:586] duration metric: took 204.335382ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:13:48.387743  506990 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:13:48.391209  506990 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 10:13:48.391240  506990 node_conditions.go:123] node cpu capacity is 8
	I1025 10:13:48.391254  506990 node_conditions.go:105] duration metric: took 3.496857ms to run NodePressure ...
	I1025 10:13:48.391270  506990 start.go:241] waiting for startup goroutines ...
	I1025 10:13:48.391279  506990 start.go:246] waiting for cluster config update ...
	I1025 10:13:48.391288  506990 start.go:255] writing updated cluster config ...
	I1025 10:13:48.391678  506990 ssh_runner.go:195] Run: rm -f paused
	I1025 10:13:48.396037  506990 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:13:48.397409  506990 kapi.go:59] client config for pause-200480: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-321838/.minikube/profiles/pause-200480/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-321838/.minikube/profiles/pause-200480/client.key", CAFile:"/home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 10:13:48.400951  506990 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dpc7k" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:13:48.406284  506990 pod_ready.go:94] pod "coredns-66bc5c9577-dpc7k" is "Ready"
	I1025 10:13:48.406311  506990 pod_ready.go:86] duration metric: took 5.333023ms for pod "coredns-66bc5c9577-dpc7k" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:13:48.408694  506990 pod_ready.go:83] waiting for pod "etcd-pause-200480" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:13:48.413161  506990 pod_ready.go:94] pod "etcd-pause-200480" is "Ready"
	I1025 10:13:48.413182  506990 pod_ready.go:86] duration metric: took 4.460876ms for pod "etcd-pause-200480" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:13:48.415379  506990 pod_ready.go:83] waiting for pod "kube-apiserver-pause-200480" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:13:48.419308  506990 pod_ready.go:94] pod "kube-apiserver-pause-200480" is "Ready"
	I1025 10:13:48.419360  506990 pod_ready.go:86] duration metric: took 3.956028ms for pod "kube-apiserver-pause-200480" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:13:48.421581  506990 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-200480" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:13:48.801662  506990 pod_ready.go:94] pod "kube-controller-manager-pause-200480" is "Ready"
	I1025 10:13:48.801687  506990 pod_ready.go:86] duration metric: took 380.080306ms for pod "kube-controller-manager-pause-200480" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:13:49.001155  506990 pod_ready.go:83] waiting for pod "kube-proxy-9t747" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:13:49.401380  506990 pod_ready.go:94] pod "kube-proxy-9t747" is "Ready"
	I1025 10:13:49.401416  506990 pod_ready.go:86] duration metric: took 400.232623ms for pod "kube-proxy-9t747" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:13:49.602486  506990 pod_ready.go:83] waiting for pod "kube-scheduler-pause-200480" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:13:50.000972  506990 pod_ready.go:94] pod "kube-scheduler-pause-200480" is "Ready"
	I1025 10:13:50.001011  506990 pod_ready.go:86] duration metric: took 398.490536ms for pod "kube-scheduler-pause-200480" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:13:50.001028  506990 pod_ready.go:40] duration metric: took 1.604953034s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:13:50.051489  506990 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 10:13:50.057755  506990 out.go:179] * Done! kubectl is now configured to use "pause-200480" cluster and "default" namespace by default
	I1025 10:13:47.717003  505930 cli_runner.go:164] Run: docker network inspect stopped-upgrade-291164 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:13:47.736362  505930 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1025 10:13:47.740742  505930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:13:47.753563  505930 kubeadm.go:883] updating cluster {Name:stopped-upgrade-291164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-291164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] AP
IServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:13:47.753694  505930 preload.go:183] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1025 10:13:47.753745  505930 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:13:47.798965  505930 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:13:47.798988  505930 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:13:47.799045  505930 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:13:47.835189  505930 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:13:47.835210  505930 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:13:47.835218  505930 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.28.3 crio true true} ...
	I1025 10:13:47.835335  505930 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=stopped-upgrade-291164 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-291164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:13:47.835401  505930 ssh_runner.go:195] Run: crio config
	I1025 10:13:47.894593  505930 cni.go:84] Creating CNI manager for ""
	I1025 10:13:47.894616  505930 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:13:47.894636  505930 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:13:47.894659  505930 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-291164 NodeName:stopped-upgrade-291164 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:13:47.894843  505930 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-291164"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
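The generated kubeadm.yaml above is rendered by minikube from its cluster parameters. As a hedged illustration only (not minikube's actual template code, which lives in its bootstrapper packages), the Go sketch below marshals the final KubeProxyConfiguration document of that file with gopkg.in/yaml.v3; the struct and field names are assumptions for this sketch, and only the emitted keys mirror the log.

    package main

    import (
    	"fmt"

    	"gopkg.in/yaml.v3" // assumed dependency for this sketch
    )

    // kubeProxyConfig mirrors the last YAML document in the generated
    // kubeadm.yaml shown above; the struct itself is illustrative.
    type kubeProxyConfig struct {
    	APIVersion         string `yaml:"apiVersion"`
    	Kind               string `yaml:"kind"`
    	ClusterCIDR        string `yaml:"clusterCIDR"`
    	MetricsBindAddress string `yaml:"metricsBindAddress"`
    	Conntrack          struct {
    		MaxPerCore            int    `yaml:"maxPerCore"`
    		TCPEstablishedTimeout string `yaml:"tcpEstablishedTimeout"`
    		TCPCloseWaitTimeout   string `yaml:"tcpCloseWaitTimeout"`
    	} `yaml:"conntrack"`
    }

    func main() {
    	cfg := kubeProxyConfig{
    		APIVersion:         "kubeproxy.config.k8s.io/v1alpha1",
    		Kind:               "KubeProxyConfiguration",
    		ClusterCIDR:        "10.244.0.0/16",
    		MetricsBindAddress: "0.0.0.0:10249",
    	}
    	cfg.Conntrack.TCPEstablishedTimeout = "0s"
    	cfg.Conntrack.TCPCloseWaitTimeout = "0s"

    	out, err := yaml.Marshal(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(out))
    }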
	I1025 10:13:47.894912  505930 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1025 10:13:47.905036  505930 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:13:47.905116  505930 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:13:47.914811  505930 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1025 10:13:47.935547  505930 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:13:47.956610  505930 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1025 10:13:47.978665  505930 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:13:47.982845  505930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
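The bash one-liner above keeps exactly one control-plane.minikube.internal mapping in /etc/hosts by filtering out any stale line and appending the current one. A hedged Go equivalent of that filter-and-append idea follows (function names are illustrative; the authoritative step is the shell command in the log, and writing the result back would still require root, as with the `sudo cp` there):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // pinHostsEntry drops any existing line for host and appends ip<TAB>host,
    // mirroring the grep -v / echo pipeline in the log above.
    func pinHostsEntry(hostsText, ip, host string) string {
    	var kept []string
    	for _, line := range strings.Split(hostsText, "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // remove the stale mapping
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+host)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(pinHostsEntry(strings.TrimRight(string(data), "\n"),
    		"192.168.103.2", "control-plane.minikube.internal"))
    }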
	I1025 10:13:47.996095  505930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:13:48.089530  505930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:13:48.110985  505930 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164 for IP: 192.168.103.2
	I1025 10:13:48.111007  505930 certs.go:195] generating shared ca certs ...
	I1025 10:13:48.111027  505930 certs.go:227] acquiring lock for ca certs: {Name:mke559c04eea9c265cb2a7beab0da125bee52db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:48.111173  505930 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key
	I1025 10:13:48.111223  505930 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key
	I1025 10:13:48.111238  505930 certs.go:257] generating profile certs ...
	I1025 10:13:48.111368  505930 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/client.key
	I1025 10:13:48.111405  505930 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/apiserver.key.e66b47c4
	I1025 10:13:48.111436  505930 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/apiserver.crt.e66b47c4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1025 10:13:48.275665  505930 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/apiserver.crt.e66b47c4 ...
	I1025 10:13:48.275692  505930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/apiserver.crt.e66b47c4: {Name:mk049fb9885555d7528df76ea858b38a88968b86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:48.275868  505930 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/apiserver.key.e66b47c4 ...
	I1025 10:13:48.275887  505930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/apiserver.key.e66b47c4: {Name:mk21276022fecb336ea9df82bf35550d89bcaf0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:48.276010  505930 certs.go:382] copying /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/apiserver.crt.e66b47c4 -> /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/apiserver.crt
	I1025 10:13:48.276167  505930 certs.go:386] copying /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/apiserver.key.e66b47c4 -> /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/apiserver.key
	I1025 10:13:48.276326  505930 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/proxy-client.key
	I1025 10:13:48.276440  505930 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem (1338 bytes)
	W1025 10:13:48.276467  505930 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455_empty.pem, impossibly tiny 0 bytes
	I1025 10:13:48.276478  505930 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:13:48.276498  505930 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:13:48.276520  505930 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:13:48.276549  505930 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem (1679 bytes)
	I1025 10:13:48.276600  505930 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:13:48.277391  505930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:13:48.307333  505930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 10:13:48.341481  505930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:13:48.379613  505930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:13:48.410893  505930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1025 10:13:48.448527  505930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:13:48.478208  505930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:13:48.506199  505930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 10:13:48.534433  505930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:13:48.562195  505930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem --> /usr/share/ca-certificates/325455.pem (1338 bytes)
	I1025 10:13:48.590008  505930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /usr/share/ca-certificates/3254552.pem (1708 bytes)
	I1025 10:13:48.617592  505930 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:13:48.637143  505930 ssh_runner.go:195] Run: openssl version
	I1025 10:13:48.643375  505930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:13:48.655143  505930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:48.659278  505930 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:32 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:48.659384  505930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:48.667144  505930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:13:48.678569  505930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/325455.pem && ln -fs /usr/share/ca-certificates/325455.pem /etc/ssl/certs/325455.pem"
	I1025 10:13:48.690100  505930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/325455.pem
	I1025 10:13:48.694508  505930 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:38 /usr/share/ca-certificates/325455.pem
	I1025 10:13:48.694569  505930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/325455.pem
	I1025 10:13:48.704090  505930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/325455.pem /etc/ssl/certs/51391683.0"
	I1025 10:13:48.715686  505930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3254552.pem && ln -fs /usr/share/ca-certificates/3254552.pem /etc/ssl/certs/3254552.pem"
	I1025 10:13:48.726538  505930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3254552.pem
	I1025 10:13:48.731788  505930 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:38 /usr/share/ca-certificates/3254552.pem
	I1025 10:13:48.731847  505930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3254552.pem
	I1025 10:13:48.739136  505930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3254552.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:13:48.749163  505930 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:13:48.753352  505930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:13:48.761106  505930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:13:48.768168  505930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:13:48.775480  505930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:13:48.783485  505930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:13:48.790673  505930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
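	Each `openssl x509 -checkend 86400` run above verifies that a certificate remains valid for at least another 24 hours. A hedged Go equivalent using the standard crypto/x509 package is sketched below (the file path is one example taken from the log; the helper name is made up for this sketch):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first PEM certificate in path expires
    // inside the given window — the same check as `openssl x509 -checkend`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }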
	I1025 10:13:48.797475  505930 kubeadm.go:400] StartCluster: {Name:stopped-upgrade-291164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-291164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:13:48.797562  505930 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:13:48.797610  505930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:13:48.841869  505930 cri.go:89] found id: ""
	I1025 10:13:48.841937  505930 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W1025 10:13:48.853914  505930 kubeadm.go:413] apiserver tunnel failed: apiserver port not set
	I1025 10:13:48.853949  505930 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:13:48.853960  505930 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:13:48.854009  505930 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:13:48.864288  505930 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:13:48.865157  505930 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-291164" does not appear in /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:13:48.865633  505930 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-321838/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-291164" cluster setting kubeconfig missing "stopped-upgrade-291164" context setting]
	I1025 10:13:48.866195  505930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:48.866937  505930 kapi.go:59] client config for stopped-upgrade-291164: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/client.key", CAFile:"/home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 10:13:48.867394  505930 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1025 10:13:48.867417  505930 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1025 10:13:48.867430  505930 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 10:13:48.867436  505930 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1025 10:13:48.867441  505930 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 10:13:48.867790  505930 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:13:48.878540  505930 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-25 10:13:27.961571632 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-25 10:13:47.975887500 +0000
	@@ -50,6 +50,7 @@
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	 cgroupDriver: systemd
	+containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	 hairpinMode: hairpin-veth
	 runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	
	-- /stdout --
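	minikube decides to reconfigure here because `diff -u` exits with status 1 when the freshly rendered kubeadm.yaml.new differs from the copy on disk. A hedged sketch of that exit-code convention via os/exec follows (the helper name is invented for illustration; diff exits 0 on identical files, 1 on differences, and >1 on error):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // configDrifted runs `diff -u old new` and interprets the exit status.
    func configDrifted(oldPath, newPath string) (bool, string, error) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    	if err == nil {
    		return false, "", nil // identical
    	}
    	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
    		return true, string(out), nil // files differ: drift detected
    	}
    	return false, "", err // diff itself failed
    }

    func main() {
    	drifted, diff, err := configDrifted(
    		"/var/tmp/minikube/kubeadm.yaml",
    		"/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	if drifted {
    		fmt.Println("will reconfigure cluster:\n" + diff)
    	}
    }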
	I1025 10:13:48.878566  505930 kubeadm.go:1160] stopping kube-system containers ...
	I1025 10:13:48.878585  505930 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1025 10:13:48.878644  505930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:13:48.920510  505930 cri.go:89] found id: ""
	I1025 10:13:48.920575  505930 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1025 10:13:48.945967  505930 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:13:48.957940  505930 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Oct 25 10:13 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Oct 25 10:13 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Oct 25 10:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Oct 25 10:13 /etc/kubernetes/scheduler.conf
	
	I1025 10:13:48.958018  505930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf
	I1025 10:13:48.968394  505930 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:13:48.968573  505930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:13:48.979629  505930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf
	I1025 10:13:48.990998  505930 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:13:48.991066  505930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:13:49.003432  505930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf
	I1025 10:13:49.015050  505930 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:13:49.015120  505930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:13:49.039852  505930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf
	I1025 10:13:49.050793  505930 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:13:49.050870  505930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 10:13:49.064547  505930 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:13:49.074469  505930 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 10:13:49.129921  505930 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 10:13:50.013342  505930 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 10:13:50.178067  505930 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 10:13:50.258130  505930 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1025 10:13:50.324835  505930 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:13:50.324917  505930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:13:50.825535  505930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
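	The two `pgrep` runs above are minikube polling until a kube-apiserver process matching the pattern appears. A hedged sketch of the same poll loop is below; the interval and deadline are assumptions, and pgrep's contract (exit 0 on a match, 1 when nothing matches) is what makes `Run()`'s error usable as the signal:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForProcess polls pgrep until the pattern matches or the deadline passes.
    func waitForProcess(pattern string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
    			return nil // pgrep exited 0: process found
    		}
    		time.Sleep(500 * time.Millisecond) // roughly the cadence in the log
    	}
    	return fmt.Errorf("timed out waiting for %q", pattern)
    }

    func main() {
    	if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("apiserver process is up")
    }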
	I1025 10:13:47.597440  506267 cli_runner.go:164] Run: docker container inspect missing-upgrade-363411 --format={{.State.Status}}
	W1025 10:13:47.618244  506267 cli_runner.go:211] docker container inspect missing-upgrade-363411 --format={{.State.Status}} returned with exit code 1
	I1025 10:13:47.618348  506267 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-363411": docker container inspect missing-upgrade-363411 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-363411
	I1025 10:13:47.618366  506267 oci.go:673] temporary error: container missing-upgrade-363411 status is  but expect it to be exited
	I1025 10:13:47.618419  506267 retry.go:31] will retry after 1.978580016s: couldn't verify container is exited. %v: unknown state "missing-upgrade-363411": docker container inspect missing-upgrade-363411 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-363411
	I1025 10:13:49.598496  506267 cli_runner.go:164] Run: docker container inspect missing-upgrade-363411 --format={{.State.Status}}
	W1025 10:13:49.619686  506267 cli_runner.go:211] docker container inspect missing-upgrade-363411 --format={{.State.Status}} returned with exit code 1
	I1025 10:13:49.619768  506267 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-363411": docker container inspect missing-upgrade-363411 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-363411
	I1025 10:13:49.619782  506267 oci.go:673] temporary error: container missing-upgrade-363411 status is  but expect it to be exited
	I1025 10:13:49.619825  506267 retry.go:31] will retry after 3.772902502s: couldn't verify container is exited. %v: unknown state "missing-upgrade-363411": docker container inspect missing-upgrade-363411 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-363411
	
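	The missing-upgrade-363411 stream above retries `docker container inspect --format {{.State.Status}}` with growing randomized delays until the container reports exited. A hedged sketch of that retry shape follows (the delay formula and attempt count are illustrative, not minikube's retry.go internals):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"strings"
    	"time"
    )

    // containerStatus shells out the same way the log does; it returns an
    // error ("No such container") while the container does not exist.
    func containerStatus(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", name,
    		"--format", "{{.State.Status}}").Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	name := "missing-upgrade-363411"
    	for attempt := 1; attempt <= 5; attempt++ {
    		status, err := containerStatus(name)
    		if err == nil && status == "exited" {
    			fmt.Println("container is exited")
    			return
    		}
    		// Randomized, roughly growing delay, echoing the retry log lines.
    		delay := time.Duration(float64(time.Second) * (1 + rand.Float64()) * float64(attempt))
    		fmt.Printf("will retry after %v\n", delay)
    		time.Sleep(delay)
    	}
    	fmt.Println("gave up waiting for container to exit")
    }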
	
	==> CRI-O <==
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.800937615Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.801813723Z" level=info msg="Conmon does support the --sync option"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.801831534Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.801845411Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.802534359Z" level=info msg="Conmon does support the --sync option"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.802548679Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.80662803Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.806665892Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.807433152Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.807856425Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.807910306Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.814017268Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.856574285Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-dpc7k Namespace:kube-system ID:34c206069e045b90b172264f76344fb2fe7adf569e64d0ac5d78f45644d44541 UID:1170a8f9-34c4-4475-8133-52cc6e952076 NetNS:/var/run/netns/ac65af99-b58a-4cd9-914d-a34bf8be9107 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a318}] Aliases:map[]}"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.8567783Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-dpc7k for CNI network kindnet (type=ptp)"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.85776868Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.857822448Z" level=info msg="Starting seccomp notifier watcher"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.857950633Z" level=info msg="Create NRI interface"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.858212689Z" level=info msg="built-in NRI default validator is disabled"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.858225972Z" level=info msg="runtime interface created"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.858247823Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.858256525Z" level=info msg="runtime interface starting up..."
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.858266769Z" level=info msg="starting plugins..."
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.858289552Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.859204627Z" level=info msg="No systemd watchdog enabled"
	Oct 25 10:13:46 pause-200480 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	ec3f0975fec3d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago      Running             coredns                   0                   34c206069e045       coredns-66bc5c9577-dpc7k               kube-system
	fe27241176bf7       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   23 seconds ago      Running             kube-proxy                0                   4c1a0ea09c9ce       kube-proxy-9t747                       kube-system
	e080b5d65c56b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   23 seconds ago      Running             kindnet-cni               0                   5dd0e5d32784a       kindnet-s7b7r                          kube-system
	a2bf1b0b7321a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   34 seconds ago      Running             kube-scheduler            0                   19a333cb1dce3       kube-scheduler-pause-200480            kube-system
	27db2729cf64a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   34 seconds ago      Running             kube-controller-manager   0                   52c9e2175a8d7       kube-controller-manager-pause-200480   kube-system
	74b63b63d97cb       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   34 seconds ago      Running             kube-apiserver            0                   dfe645817e199       kube-apiserver-pause-200480            kube-system
	a6c95c62c336d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   34 seconds ago      Running             etcd                      0                   e53a0fb3de7e9       etcd-pause-200480                      kube-system
	
	
	==> coredns [ec3f0975fec3d919f90773255c0149ca9e9d19ae6f9ec9a6fb3defbc4471e7cf] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35353 - 46119 "HINFO IN 1146517426990655923.286383782588210196. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.150992619s
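	The NXDOMAIN line above is CoreDNS's startup self-test query against itself. As a hedged aside, in-cluster DNS can be exercised the same way from a pod with Go's pure resolver pointed at the kube-dns ClusterIP (10.96.0.10, allocated in the kube-apiserver log below); the target name here is just an example and this only resolves from inside the cluster:

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Force the pure-Go resolver to talk to the cluster DNS service.
    	r := &net.Resolver{
    		PreferGo: true,
    		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
    			d := net.Dialer{Timeout: 2 * time.Second}
    			return d.DialContext(ctx, network, "10.96.0.10:53")
    		},
    	}
    	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(addrs)
    }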
	
	
	==> describe nodes <==
	Name:               pause-200480
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-200480
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=pause-200480
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_13_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:13:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-200480
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:13:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:13:45 +0000   Sat, 25 Oct 2025 10:13:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:13:45 +0000   Sat, 25 Oct 2025 10:13:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:13:45 +0000   Sat, 25 Oct 2025 10:13:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:13:45 +0000   Sat, 25 Oct 2025 10:13:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-200480
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                f111898f-6a32-4b0d-97a9-8bbcb9a6dfa5
	  Boot ID:                    41de8bfa-0cc1-441f-80d4-c56fb8371229
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://Unknown
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-dpc7k                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-pause-200480                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-s7b7r                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-pause-200480             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-pause-200480    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-9t747                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-pause-200480             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node pause-200480 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node pause-200480 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node pause-200480 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node pause-200480 event: Registered Node pause-200480 in Controller
	  Normal  NodeReady                13s   kubelet          Node pause-200480 status is now: NodeReady
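	The Conditions table in this "describe nodes" section is a condensed view of node.Status.Conditions. A hedged client-go sketch that reads the same data follows (kubeconfig location and node name are taken from this report; error handling is minimal for brevity):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load the default kubeconfig (~/.kube/config).
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.Background(), "pause-200480", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Same rows as the Conditions table above: type, status, reason.
    	for _, c := range node.Status.Conditions {
    		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
    	}
    }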
	
	
	==> dmesg <==
	[  +0.000020] ll header: 00000000: ff ff ff ff ff ff 16 b3 d7 05 74 b5 08 06
	[ +20.912051] IPv4: martian source 10.244.0.1 from 10.244.0.53, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6e b0 a7 e4 38 e4 08 06
	[Oct25 09:35] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +1.057046] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +1.023954] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +1.023909] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +1.023917] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +1.023908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +2.047808] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +4.031795] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +8.447358] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[ +16.382923] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000014] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[Oct25 09:36] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	
	
	==> etcd [a6c95c62c336d6d74920be9e94fef714f88e4bb1327664ce7a2283c01f3f72ce] <==
	{"level":"warn","ts":"2025-10-25T10:13:21.467752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.492033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.501394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.517017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.525596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.534059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.543672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.557694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.564466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.576643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.585719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.596660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.606782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.616134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.624429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.635259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.646561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.655689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.664756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.673843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.685399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.702139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.711353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.722122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.782904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42812","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:13:54 up  1:56,  0 user,  load average: 3.35, 1.81, 5.97
	Linux pause-200480 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e080b5d65c56bd1b04301a4db5b669a2ce749613037e2227c561a39e07d71b3a] <==
	I1025 10:13:30.953192       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:13:30.953507       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 10:13:30.953646       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:13:30.953660       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:13:30.953683       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:13:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:13:31.247039       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:13:31.247153       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:13:31.247286       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	E1025 10:13:31.247556       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 10:13:31.345577       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 10:13:31.345872       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1025 10:13:31.387987       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:13:31.445513       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1025 10:13:32.747879       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:13:32.747922       1 metrics.go:72] Registering metrics
	I1025 10:13:32.748004       1 controller.go:711] "Syncing nftables rules"
	I1025 10:13:41.247420       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:13:41.247534       1 main.go:301] handling current node
	I1025 10:13:51.251439       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:13:51.251494       1 main.go:301] handling current node
	
	
	==> kube-apiserver [74b63b63d97cbaf45ce6897ced783b4c3e4f98c71e66414df394bff0ac34580e] <==
	I1025 10:13:22.389058       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:13:22.389116       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 10:13:22.389138       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1025 10:13:22.394893       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:13:22.395066       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 10:13:22.400430       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:13:22.400631       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:13:22.580491       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:13:23.282901       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 10:13:23.286962       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 10:13:23.286984       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:13:23.821817       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:13:23.863990       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:13:23.988509       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 10:13:23.997052       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1025 10:13:23.998623       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:13:24.004356       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:13:24.312867       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:13:25.041131       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:13:25.061143       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 10:13:25.070432       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 10:13:29.367735       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:13:29.372095       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:13:29.970241       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:13:30.264822       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [27db2729cf64a2e9b1d06ef82efdd2cec3eeb410d21ea6d1ed35c44ba965cd5a] <==
	I1025 10:13:29.282867       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 10:13:29.310901       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 10:13:29.310924       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 10:13:29.311040       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:13:29.311116       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 10:13:29.311396       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 10:13:29.311427       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 10:13:29.311494       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:13:29.311767       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 10:13:29.311796       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 10:13:29.311846       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 10:13:29.313033       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 10:13:29.313061       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 10:13:29.313192       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 10:13:29.313353       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 10:13:29.316299       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:13:29.316360       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 10:13:29.317565       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:13:29.323795       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 10:13:29.326182       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:13:29.327283       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 10:13:29.331639       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 10:13:29.339105       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 10:13:29.346635       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:13:44.256672       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [fe27241176bf76993884109c1cdc551c32fe9af9f43fbb3aeae01048d5b1e4bf] <==
	I1025 10:13:30.815922       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:13:30.924086       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:13:31.024987       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:13:31.025034       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 10:13:31.025203       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:13:31.054336       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:13:31.054405       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:13:31.061079       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:13:31.061542       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:13:31.061572       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:13:31.063070       1 config.go:200] "Starting service config controller"
	I1025 10:13:31.063103       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:13:31.063262       1 config.go:309] "Starting node config controller"
	I1025 10:13:31.063273       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:13:31.063280       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:13:31.063544       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:13:31.063980       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:13:31.063661       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:13:31.064261       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:13:31.163714       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:13:31.165471       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:13:31.166172       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a2bf1b0b7321a314961ca686d9983e6fcf281b2c4096cb1d82c060bdd8b0dc28] <==
	E1025 10:13:22.333003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:13:22.333123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 10:13:22.333161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 10:13:22.333206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:13:22.333226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:13:22.333307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 10:13:22.333379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 10:13:22.333411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 10:13:22.333459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:13:22.333710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 10:13:22.333766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 10:13:22.333906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:13:23.180597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 10:13:23.185749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 10:13:23.191922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:13:23.199346       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:13:23.223517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 10:13:23.230684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 10:13:23.231677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 10:13:23.374695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:13:23.408868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:13:23.493491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1025 10:13:23.500524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 10:13:23.596421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1025 10:13:25.929836       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:13:25 pause-200480 kubelet[1314]: I1025 10:13:25.998104    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-200480" podStartSLOduration=0.99807812 podStartE2EDuration="998.07812ms" podCreationTimestamp="2025-10-25 10:13:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:13:25.986391464 +0000 UTC m=+1.175178220" watchObservedRunningTime="2025-10-25 10:13:25.99807812 +0000 UTC m=+1.186864877"
	Oct 25 10:13:26 pause-200480 kubelet[1314]: I1025 10:13:26.011099    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-200480" podStartSLOduration=1.011078172 podStartE2EDuration="1.011078172s" podCreationTimestamp="2025-10-25 10:13:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:13:25.99828082 +0000 UTC m=+1.187067578" watchObservedRunningTime="2025-10-25 10:13:26.011078172 +0000 UTC m=+1.199864987"
	Oct 25 10:13:26 pause-200480 kubelet[1314]: I1025 10:13:26.027575    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-200480" podStartSLOduration=1.027524831 podStartE2EDuration="1.027524831s" podCreationTimestamp="2025-10-25 10:13:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:13:26.01105352 +0000 UTC m=+1.199840277" watchObservedRunningTime="2025-10-25 10:13:26.027524831 +0000 UTC m=+1.216311588"
	Oct 25 10:13:29 pause-200480 kubelet[1314]: I1025 10:13:29.353689    1314 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 25 10:13:29 pause-200480 kubelet[1314]: I1025 10:13:29.354418    1314 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 25 10:13:30 pause-200480 kubelet[1314]: I1025 10:13:30.337885    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/799bde2b-b5a9-41a7-a0d2-3651a174cf6f-kube-proxy\") pod \"kube-proxy-9t747\" (UID: \"799bde2b-b5a9-41a7-a0d2-3651a174cf6f\") " pod="kube-system/kube-proxy-9t747"
	Oct 25 10:13:30 pause-200480 kubelet[1314]: I1025 10:13:30.337946    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/799bde2b-b5a9-41a7-a0d2-3651a174cf6f-xtables-lock\") pod \"kube-proxy-9t747\" (UID: \"799bde2b-b5a9-41a7-a0d2-3651a174cf6f\") " pod="kube-system/kube-proxy-9t747"
	Oct 25 10:13:30 pause-200480 kubelet[1314]: I1025 10:13:30.338048    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/799bde2b-b5a9-41a7-a0d2-3651a174cf6f-lib-modules\") pod \"kube-proxy-9t747\" (UID: \"799bde2b-b5a9-41a7-a0d2-3651a174cf6f\") " pod="kube-system/kube-proxy-9t747"
	Oct 25 10:13:30 pause-200480 kubelet[1314]: I1025 10:13:30.338090    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5d2bdb5-f9ae-4593-a6cb-d2d0063f7237-lib-modules\") pod \"kindnet-s7b7r\" (UID: \"d5d2bdb5-f9ae-4593-a6cb-d2d0063f7237\") " pod="kube-system/kindnet-s7b7r"
	Oct 25 10:13:30 pause-200480 kubelet[1314]: I1025 10:13:30.338120    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d5d2bdb5-f9ae-4593-a6cb-d2d0063f7237-cni-cfg\") pod \"kindnet-s7b7r\" (UID: \"d5d2bdb5-f9ae-4593-a6cb-d2d0063f7237\") " pod="kube-system/kindnet-s7b7r"
	Oct 25 10:13:30 pause-200480 kubelet[1314]: I1025 10:13:30.338141    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5d2bdb5-f9ae-4593-a6cb-d2d0063f7237-xtables-lock\") pod \"kindnet-s7b7r\" (UID: \"d5d2bdb5-f9ae-4593-a6cb-d2d0063f7237\") " pod="kube-system/kindnet-s7b7r"
	Oct 25 10:13:30 pause-200480 kubelet[1314]: I1025 10:13:30.338177    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2qkl\" (UniqueName: \"kubernetes.io/projected/799bde2b-b5a9-41a7-a0d2-3651a174cf6f-kube-api-access-r2qkl\") pod \"kube-proxy-9t747\" (UID: \"799bde2b-b5a9-41a7-a0d2-3651a174cf6f\") " pod="kube-system/kube-proxy-9t747"
	Oct 25 10:13:30 pause-200480 kubelet[1314]: I1025 10:13:30.338208    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wcfs\" (UniqueName: \"kubernetes.io/projected/d5d2bdb5-f9ae-4593-a6cb-d2d0063f7237-kube-api-access-7wcfs\") pod \"kindnet-s7b7r\" (UID: \"d5d2bdb5-f9ae-4593-a6cb-d2d0063f7237\") " pod="kube-system/kindnet-s7b7r"
	Oct 25 10:13:30 pause-200480 kubelet[1314]: I1025 10:13:30.988648    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9t747" podStartSLOduration=0.988619463 podStartE2EDuration="988.619463ms" podCreationTimestamp="2025-10-25 10:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:13:30.987995736 +0000 UTC m=+6.176782495" watchObservedRunningTime="2025-10-25 10:13:30.988619463 +0000 UTC m=+6.177406222"
	Oct 25 10:13:31 pause-200480 kubelet[1314]: I1025 10:13:31.005704    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-s7b7r" podStartSLOduration=1.005677277 podStartE2EDuration="1.005677277s" podCreationTimestamp="2025-10-25 10:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:13:31.00543887 +0000 UTC m=+6.194225627" watchObservedRunningTime="2025-10-25 10:13:31.005677277 +0000 UTC m=+6.194464035"
	Oct 25 10:13:41 pause-200480 kubelet[1314]: I1025 10:13:41.342934    1314 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 25 10:13:41 pause-200480 kubelet[1314]: I1025 10:13:41.413034    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1170a8f9-34c4-4475-8133-52cc6e952076-config-volume\") pod \"coredns-66bc5c9577-dpc7k\" (UID: \"1170a8f9-34c4-4475-8133-52cc6e952076\") " pod="kube-system/coredns-66bc5c9577-dpc7k"
	Oct 25 10:13:41 pause-200480 kubelet[1314]: I1025 10:13:41.413102    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2g7q\" (UniqueName: \"kubernetes.io/projected/1170a8f9-34c4-4475-8133-52cc6e952076-kube-api-access-n2g7q\") pod \"coredns-66bc5c9577-dpc7k\" (UID: \"1170a8f9-34c4-4475-8133-52cc6e952076\") " pod="kube-system/coredns-66bc5c9577-dpc7k"
	Oct 25 10:13:42 pause-200480 kubelet[1314]: I1025 10:13:42.008546    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dpc7k" podStartSLOduration=12.008520565 podStartE2EDuration="12.008520565s" podCreationTimestamp="2025-10-25 10:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:13:42.008417138 +0000 UTC m=+17.197203895" watchObservedRunningTime="2025-10-25 10:13:42.008520565 +0000 UTC m=+17.197307323"
	Oct 25 10:13:45 pause-200480 kubelet[1314]: W1025 10:13:45.195458    1314 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 25 10:13:45 pause-200480 kubelet[1314]: E1025 10:13:45.195586    1314 log.go:32] "Version from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 25 10:13:50 pause-200480 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:13:50 pause-200480 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:13:50 pause-200480 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 10:13:50 pause-200480 systemd[1]: kubelet.service: Consumed 1.234s CPU time.
	

-- /stdout --
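
The repeated reflector failures in the kindnet log above ("dial tcp 10.96.0.1:443: connect: connection refused") are client-go informers retrying their initial LIST against the default kubernetes Service VIP while the apiserver is still coming up; they clear on their own once "Caches are synced" is logged at 10:13:32. The kubelet's later crio.sock dial errors have the same shape: a unix-socket dial against a runtime that the in-progress pause has already stopped. As a rough illustration of the call the reflector is retrying, here is a minimal client-go sketch (illustrative only, not part of the test harness; assumes it runs in a pod with RBAC to list namespaces):

	// list_ns.go - sketch of the LIST the reflector retries above.
	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// InClusterConfig resolves KUBERNETES_SERVICE_HOST, i.e. the
		// 10.96.0.1:443 VIP seen in the errors above.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// While the apiserver is down this returns "connection refused";
		// informers keep retrying with backoff rather than failing hard.
		nsList, err := cs.CoreV1().Namespaces().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatalf("list namespaces: %v", err)
		}
		fmt.Printf("saw %d namespaces\n", len(nsList.Items))
	}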
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-200480 -n pause-200480
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-200480 -n pause-200480: exit status 2 (441.451071ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
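
The --format value passed to minikube status is a Go text/template rendered against the profile's status struct, which is why stdout can still read "Running" for a component while the exit code (2 here) carries the degraded state. A minimal sketch of that rendering, assuming a stand-in struct (ClusterStatus below is illustrative, not minikube's actual type):

	// status_format.go - sketch of applying a --format template to a status value.
	package main

	import (
		"os"
		"text/template"
	)

	// ClusterStatus is a stand-in carrying the field the template references.
	type ClusterStatus struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := ClusterStatus{Host: "Running", Kubelet: "Running", APIServer: "Paused"}
		// Equivalent of --format={{.APIServer}}: parse once, render against the struct.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
	}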
helpers_test.go:269: (dbg) Run:  kubectl --context pause-200480 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-200480
helpers_test.go:243: (dbg) docker inspect pause-200480:

-- stdout --
	[
	    {
	        "Id": "b0bf98b8c65868044fc0a5b47fe72a4ffcd4b0d3d85058a553b41455a6ab4836",
	        "Created": "2025-10-25T10:13:04.732729023Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 496205,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:13:05.57937843Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/b0bf98b8c65868044fc0a5b47fe72a4ffcd4b0d3d85058a553b41455a6ab4836/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b0bf98b8c65868044fc0a5b47fe72a4ffcd4b0d3d85058a553b41455a6ab4836/hostname",
	        "HostsPath": "/var/lib/docker/containers/b0bf98b8c65868044fc0a5b47fe72a4ffcd4b0d3d85058a553b41455a6ab4836/hosts",
	        "LogPath": "/var/lib/docker/containers/b0bf98b8c65868044fc0a5b47fe72a4ffcd4b0d3d85058a553b41455a6ab4836/b0bf98b8c65868044fc0a5b47fe72a4ffcd4b0d3d85058a553b41455a6ab4836-json.log",
	        "Name": "/pause-200480",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-200480:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-200480",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b0bf98b8c65868044fc0a5b47fe72a4ffcd4b0d3d85058a553b41455a6ab4836",
	                "LowerDir": "/var/lib/docker/overlay2/2ab96fa301af71428557f091b88d4ae0e237d47661ebc6475228939c50809afb-init/diff:/var/lib/docker/overlay2/9d1960ec6ab151df7efbd4d43f9ccd1aaf4e0bd9e7db4285118644a6d5a54279/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2ab96fa301af71428557f091b88d4ae0e237d47661ebc6475228939c50809afb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2ab96fa301af71428557f091b88d4ae0e237d47661ebc6475228939c50809afb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2ab96fa301af71428557f091b88d4ae0e237d47661ebc6475228939c50809afb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-200480",
	                "Source": "/var/lib/docker/volumes/pause-200480/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-200480",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-200480",
	                "name.minikube.sigs.k8s.io": "pause-200480",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8a9c1b2227981efe0186de18e2155254d1788fc1de4bbd697b690cd8931c8f67",
	            "SandboxKey": "/var/run/docker/netns/8a9c1b222798",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32978"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32979"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32982"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32980"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32981"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-200480": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:c8:36:c4:8d:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d5b5f4eaeccab16064d966f84da86c61e47ae88cc4af80beaf17957998026d5d",
	                    "EndpointID": "6ac6d9b5aac6693236c2ec0efdc5d675693c820736a899643e66997f2663d32f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-200480",
	                        "b0bf98b8c658"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
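
Note that the inspect output above shows "Paused": false with "Running": true: minikube pause stops the Kubernetes processes inside the node container through the container runtime, so the outer Docker container is never docker-paused. When a post-mortem needs individual fields rather than the full dump, the JSON (an array with one element per container) can be decoded directly; a minimal sketch, assuming docker on PATH and mirroring only the fields shown above:

	// inspect_fields.go - sketch of extracting fields from `docker inspect` output.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// container mirrors only the inspect fields this sketch reads.
	type container struct {
		Name  string
		State struct {
			Status string
			Paused bool
		}
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "pause-200480").Output()
		if err != nil {
			log.Fatal(err)
		}
		var cs []container // docker inspect always emits a JSON array
		if err := json.Unmarshal(out, &cs); err != nil {
			log.Fatal(err)
		}
		c := cs[0]
		fmt.Printf("%s: status=%s paused=%v 8443->%v\n",
			c.Name, c.State.Status, c.State.Paused, c.NetworkSettings.Ports["8443/tcp"])
	}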
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-200480 -n pause-200480
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-200480 -n pause-200480: exit status 2 (380.843456ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-200480 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-200480 logs -n 25: (1.155659437s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p scheduled-stop-514449 --memory=3072 --driver=docker  --container-runtime=crio                                                         │ scheduled-stop-514449       │ jenkins │ v1.37.0 │ 25 Oct 25 10:10 UTC │ 25 Oct 25 10:11 UTC │
	│ stop    │ -p scheduled-stop-514449 --schedule 5m                                                                                                   │ scheduled-stop-514449       │ jenkins │ v1.37.0 │ 25 Oct 25 10:11 UTC │                     │
	│ stop    │ -p scheduled-stop-514449 --schedule 5m                                                                                                   │ scheduled-stop-514449       │ jenkins │ v1.37.0 │ 25 Oct 25 10:11 UTC │                     │
	│ stop    │ -p scheduled-stop-514449 --schedule 5m                                                                                                   │ scheduled-stop-514449       │ jenkins │ v1.37.0 │ 25 Oct 25 10:11 UTC │                     │
	│ stop    │ -p scheduled-stop-514449 --schedule 15s                                                                                                  │ scheduled-stop-514449       │ jenkins │ v1.37.0 │ 25 Oct 25 10:11 UTC │                     │
	│ stop    │ -p scheduled-stop-514449 --schedule 15s                                                                                                  │ scheduled-stop-514449       │ jenkins │ v1.37.0 │ 25 Oct 25 10:11 UTC │                     │
	│ stop    │ -p scheduled-stop-514449 --schedule 15s                                                                                                  │ scheduled-stop-514449       │ jenkins │ v1.37.0 │ 25 Oct 25 10:11 UTC │                     │
	│ stop    │ -p scheduled-stop-514449 --cancel-scheduled                                                                                              │ scheduled-stop-514449       │ jenkins │ v1.37.0 │ 25 Oct 25 10:11 UTC │ 25 Oct 25 10:11 UTC │
	│ stop    │ -p scheduled-stop-514449 --schedule 15s                                                                                                  │ scheduled-stop-514449       │ jenkins │ v1.37.0 │ 25 Oct 25 10:11 UTC │                     │
	│ stop    │ -p scheduled-stop-514449 --schedule 15s                                                                                                  │ scheduled-stop-514449       │ jenkins │ v1.37.0 │ 25 Oct 25 10:11 UTC │                     │
	│ stop    │ -p scheduled-stop-514449 --schedule 15s                                                                                                  │ scheduled-stop-514449       │ jenkins │ v1.37.0 │ 25 Oct 25 10:11 UTC │ 25 Oct 25 10:12 UTC │
	│ delete  │ -p scheduled-stop-514449                                                                                                                 │ scheduled-stop-514449       │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ start   │ -p insufficient-storage-591590 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-591590 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ delete  │ -p insufficient-storage-591590                                                                                                           │ insufficient-storage-591590 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ start   │ -p offline-crio-169271 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-169271         │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:13 UTC │
	│ start   │ -p pause-200480 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-200480                │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:13 UTC │
	│ start   │ -p stopped-upgrade-291164 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-291164      │ jenkins │ v1.32.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:13 UTC │
	│ start   │ -p missing-upgrade-363411 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-363411      │ jenkins │ v1.32.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:13 UTC │
	│ stop    │ stopped-upgrade-291164 stop                                                                                                              │ stopped-upgrade-291164      │ jenkins │ v1.32.0 │ 25 Oct 25 10:13 UTC │ 25 Oct 25 10:13 UTC │
	│ start   │ -p stopped-upgrade-291164 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-291164      │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │                     │
	│ start   │ -p missing-upgrade-363411 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-363411      │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │                     │
	│ start   │ -p pause-200480 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-200480                │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │ 25 Oct 25 10:13 UTC │
	│ delete  │ -p offline-crio-169271                                                                                                                   │ offline-crio-169271         │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │ 25 Oct 25 10:13 UTC │
	│ start   │ -p kubernetes-upgrade-311859 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-311859   │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │                     │
	│ pause   │ -p pause-200480 --alsologtostderr -v=5                                                                                                   │ pause-200480                │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:13:48
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:13:48.270543  508860 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:13:48.270853  508860 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:13:48.270865  508860 out.go:374] Setting ErrFile to fd 2...
	I1025 10:13:48.270869  508860 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:13:48.271120  508860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 10:13:48.271679  508860 out.go:368] Setting JSON to false
	I1025 10:13:48.272781  508860 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6977,"bootTime":1761380251,"procs":263,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 10:13:48.272847  508860 start.go:141] virtualization: kvm guest
	I1025 10:13:48.275108  508860 out.go:179] * [kubernetes-upgrade-311859] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 10:13:48.276924  508860 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:13:48.276922  508860 notify.go:220] Checking for updates...
	I1025 10:13:48.279638  508860 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:13:48.281163  508860 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:13:48.285509  508860 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	I1025 10:13:48.286920  508860 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 10:13:48.288189  508860 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:13:48.290069  508860 config.go:182] Loaded profile config "missing-upgrade-363411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 10:13:48.290201  508860 config.go:182] Loaded profile config "pause-200480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:48.290294  508860 config.go:182] Loaded profile config "stopped-upgrade-291164": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 10:13:48.290431  508860 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:13:48.319205  508860 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 10:13:48.319341  508860 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:13:48.394163  508860 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:false NGoroutines:69 SystemTime:2025-10-25 10:13:48.381687367 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:13:48.394264  508860 docker.go:318] overlay module found
	I1025 10:13:48.396345  508860 out.go:179] * Using the docker driver based on user configuration
	I1025 10:13:48.399111  508860 start.go:305] selected driver: docker
	I1025 10:13:48.399134  508860 start.go:925] validating driver "docker" against <nil>
	I1025 10:13:48.399149  508860 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:13:48.399944  508860 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:13:48.466520  508860 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:false NGoroutines:69 SystemTime:2025-10-25 10:13:48.454470985 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:13:48.466722  508860 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 10:13:48.466934  508860 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 10:13:48.468980  508860 out.go:179] * Using Docker driver with root privileges
	I1025 10:13:48.470410  508860 cni.go:84] Creating CNI manager for ""
	I1025 10:13:48.470472  508860 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:13:48.470487  508860 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 10:13:48.470588  508860 start.go:349] cluster config:
	{Name:kubernetes-upgrade-311859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-311859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:13:48.472180  508860 out.go:179] * Starting "kubernetes-upgrade-311859" primary control-plane node in "kubernetes-upgrade-311859" cluster
	I1025 10:13:48.473379  508860 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:13:48.474894  508860 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:13:48.476362  508860 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 10:13:48.476397  508860 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:13:48.476409  508860 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1025 10:13:48.476420  508860 cache.go:58] Caching tarball of preloaded images
	I1025 10:13:48.476517  508860 preload.go:233] Found /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 10:13:48.476528  508860 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1025 10:13:48.476686  508860 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/kubernetes-upgrade-311859/config.json ...
	I1025 10:13:48.476710  508860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/kubernetes-upgrade-311859/config.json: {Name:mk44487fc2e94fa7be043d1f553824d3f3063775 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:48.499967  508860 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:13:48.499991  508860 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:13:48.500010  508860 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:13:48.500052  508860 start.go:360] acquireMachinesLock for kubernetes-upgrade-311859: {Name:mk86159ddb1c244ee6d57a343afa2f3989c81171 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:13:48.500179  508860 start.go:364] duration metric: took 99.296µs to acquireMachinesLock for "kubernetes-upgrade-311859"
	I1025 10:13:48.500211  508860 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-311859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-311859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:13:48.500293  508860 start.go:125] createHost starting for "" (driver="docker")
	I1025 10:13:48.187465  506990 addons.go:514] duration metric: took 4.033091ms for enable addons: enabled=[]
	I1025 10:13:48.187510  506990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:13:48.307777  506990 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:13:48.325131  506990 node_ready.go:35] waiting up to 6m0s for node "pause-200480" to be "Ready" ...
	I1025 10:13:48.334351  506990 node_ready.go:49] node "pause-200480" is "Ready"
	I1025 10:13:48.334385  506990 node_ready.go:38] duration metric: took 9.209665ms for node "pause-200480" to be "Ready" ...
	I1025 10:13:48.334404  506990 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:13:48.334467  506990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:13:48.352684  506990 api_server.go:72] duration metric: took 169.301508ms to wait for apiserver process to appear ...
	I1025 10:13:48.352727  506990 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:13:48.352750  506990 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:13:48.358252  506990 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1025 10:13:48.359310  506990 api_server.go:141] control plane version: v1.34.1
	I1025 10:13:48.359388  506990 api_server.go:131] duration metric: took 6.652432ms to wait for apiserver health ...
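
	The two lines above close a simple poll-until-healthy loop: minikube repeatedly GETs /healthz on the apiserver and accepts the control plane once it returns 200 (the stopped-upgrade-291164 run later in this section shows the same loop riding out 403 and 500 responses while RBAC bootstrap completes). A minimal Go sketch of that pattern; the URL, timeout, retry cadence, and the InsecureSkipVerify shortcut are illustrative, not minikube's actual client setup:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver's /healthz endpoint until it
	// answers 200 OK or the deadline passes. Any other status (the 403s
	// and 500s seen later in this log) just means "not healthy yet".
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Illustrative shortcut: the apiserver's serving cert is
				// signed by minikube's CA, not the system trust store.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // illustrative retry cadence
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.76.2:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
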
	I1025 10:13:48.359411  506990 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:13:48.362905  506990 system_pods.go:59] 7 kube-system pods found
	I1025 10:13:48.362939  506990 system_pods.go:61] "coredns-66bc5c9577-dpc7k" [1170a8f9-34c4-4475-8133-52cc6e952076] Running
	I1025 10:13:48.362948  506990 system_pods.go:61] "etcd-pause-200480" [4c1da88c-b301-48c5-b38a-a44eed9e833d] Running
	I1025 10:13:48.362953  506990 system_pods.go:61] "kindnet-s7b7r" [d5d2bdb5-f9ae-4593-a6cb-d2d0063f7237] Running
	I1025 10:13:48.362959  506990 system_pods.go:61] "kube-apiserver-pause-200480" [48b86f89-2f20-4480-9977-ecb83e7561ed] Running
	I1025 10:13:48.362964  506990 system_pods.go:61] "kube-controller-manager-pause-200480" [c1208adf-940e-434d-83bd-bc48516eea67] Running
	I1025 10:13:48.362969  506990 system_pods.go:61] "kube-proxy-9t747" [799bde2b-b5a9-41a7-a0d2-3651a174cf6f] Running
	I1025 10:13:48.362974  506990 system_pods.go:61] "kube-scheduler-pause-200480" [d1bbbb04-a730-4faa-87dc-a9c008d45697] Running
	I1025 10:13:48.362981  506990 system_pods.go:74] duration metric: took 3.55376ms to wait for pod list to return data ...
	I1025 10:13:48.362990  506990 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:13:48.365499  506990 default_sa.go:45] found service account: "default"
	I1025 10:13:48.365521  506990 default_sa.go:55] duration metric: took 2.524118ms for default service account to be created ...
	I1025 10:13:48.365531  506990 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:13:48.369387  506990 system_pods.go:86] 7 kube-system pods found
	I1025 10:13:48.369415  506990 system_pods.go:89] "coredns-66bc5c9577-dpc7k" [1170a8f9-34c4-4475-8133-52cc6e952076] Running
	I1025 10:13:48.369423  506990 system_pods.go:89] "etcd-pause-200480" [4c1da88c-b301-48c5-b38a-a44eed9e833d] Running
	I1025 10:13:48.369430  506990 system_pods.go:89] "kindnet-s7b7r" [d5d2bdb5-f9ae-4593-a6cb-d2d0063f7237] Running
	I1025 10:13:48.369435  506990 system_pods.go:89] "kube-apiserver-pause-200480" [48b86f89-2f20-4480-9977-ecb83e7561ed] Running
	I1025 10:13:48.369455  506990 system_pods.go:89] "kube-controller-manager-pause-200480" [c1208adf-940e-434d-83bd-bc48516eea67] Running
	I1025 10:13:48.369465  506990 system_pods.go:89] "kube-proxy-9t747" [799bde2b-b5a9-41a7-a0d2-3651a174cf6f] Running
	I1025 10:13:48.369470  506990 system_pods.go:89] "kube-scheduler-pause-200480" [d1bbbb04-a730-4faa-87dc-a9c008d45697] Running
	I1025 10:13:48.369479  506990 system_pods.go:126] duration metric: took 3.940121ms to wait for k8s-apps to be running ...
	I1025 10:13:48.369491  506990 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:13:48.369654  506990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:13:48.387679  506990 system_svc.go:56] duration metric: took 18.178399ms WaitForService to wait for kubelet
	I1025 10:13:48.387713  506990 kubeadm.go:586] duration metric: took 204.335382ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:13:48.387743  506990 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:13:48.391209  506990 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 10:13:48.391240  506990 node_conditions.go:123] node cpu capacity is 8
	I1025 10:13:48.391254  506990 node_conditions.go:105] duration metric: took 3.496857ms to run NodePressure ...
	I1025 10:13:48.391270  506990 start.go:241] waiting for startup goroutines ...
	I1025 10:13:48.391279  506990 start.go:246] waiting for cluster config update ...
	I1025 10:13:48.391288  506990 start.go:255] writing updated cluster config ...
	I1025 10:13:48.391678  506990 ssh_runner.go:195] Run: rm -f paused
	I1025 10:13:48.396037  506990 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:13:48.397409  506990 kapi.go:59] client config for pause-200480: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-321838/.minikube/profiles/pause-200480/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-321838/.minikube/profiles/pause-200480/client.key", CAFile:"/home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 10:13:48.400951  506990 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dpc7k" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:13:48.406284  506990 pod_ready.go:94] pod "coredns-66bc5c9577-dpc7k" is "Ready"
	I1025 10:13:48.406311  506990 pod_ready.go:86] duration metric: took 5.333023ms for pod "coredns-66bc5c9577-dpc7k" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:13:48.408694  506990 pod_ready.go:83] waiting for pod "etcd-pause-200480" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:13:48.413161  506990 pod_ready.go:94] pod "etcd-pause-200480" is "Ready"
	I1025 10:13:48.413182  506990 pod_ready.go:86] duration metric: took 4.460876ms for pod "etcd-pause-200480" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:13:48.415379  506990 pod_ready.go:83] waiting for pod "kube-apiserver-pause-200480" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:13:48.419308  506990 pod_ready.go:94] pod "kube-apiserver-pause-200480" is "Ready"
	I1025 10:13:48.419360  506990 pod_ready.go:86] duration metric: took 3.956028ms for pod "kube-apiserver-pause-200480" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:13:48.421581  506990 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-200480" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:13:48.801662  506990 pod_ready.go:94] pod "kube-controller-manager-pause-200480" is "Ready"
	I1025 10:13:48.801687  506990 pod_ready.go:86] duration metric: took 380.080306ms for pod "kube-controller-manager-pause-200480" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:13:49.001155  506990 pod_ready.go:83] waiting for pod "kube-proxy-9t747" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:13:49.401380  506990 pod_ready.go:94] pod "kube-proxy-9t747" is "Ready"
	I1025 10:13:49.401416  506990 pod_ready.go:86] duration metric: took 400.232623ms for pod "kube-proxy-9t747" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:13:49.602486  506990 pod_ready.go:83] waiting for pod "kube-scheduler-pause-200480" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:13:50.000972  506990 pod_ready.go:94] pod "kube-scheduler-pause-200480" is "Ready"
	I1025 10:13:50.001011  506990 pod_ready.go:86] duration metric: took 398.490536ms for pod "kube-scheduler-pause-200480" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:13:50.001028  506990 pod_ready.go:40] duration metric: took 1.604953034s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:13:50.051489  506990 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 10:13:50.057755  506990 out.go:179] * Done! kubectl is now configured to use "pause-200480" cluster and "default" namespace by default
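
	The pod_ready waiter that closes this run checks, for each labeled kube-system pod, that its PodReady condition is True. A sketch of that condition check against the client-go types; the helper name and the fabricated pod in main are illustrative, not minikube's code:

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// isPodReady reports whether the pod's PodReady condition is True,
	// the same question each pod_ready.go line above answers per pod.
	func isPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// A fabricated pod for illustration; the real waiter lists pods
		// matching labels such as component=etcd or k8s-app=kube-proxy.
		pod := &corev1.Pod{Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionTrue},
			},
		}}
		fmt.Println("ready:", isPodReady(pod))
	}
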
	I1025 10:13:47.717003  505930 cli_runner.go:164] Run: docker network inspect stopped-upgrade-291164 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:13:47.736362  505930 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1025 10:13:47.740742  505930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:13:47.753563  505930 kubeadm.go:883] updating cluster {Name:stopped-upgrade-291164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-291164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:13:47.753694  505930 preload.go:183] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1025 10:13:47.753745  505930 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:13:47.798965  505930 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:13:47.798988  505930 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:13:47.799045  505930 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:13:47.835189  505930 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:13:47.835210  505930 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:13:47.835218  505930 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.28.3 crio true true} ...
	I1025 10:13:47.835335  505930 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=stopped-upgrade-291164 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-291164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:13:47.835401  505930 ssh_runner.go:195] Run: crio config
	I1025 10:13:47.894593  505930 cni.go:84] Creating CNI manager for ""
	I1025 10:13:47.894616  505930 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:13:47.894636  505930 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:13:47.894659  505930 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-291164 NodeName:stopped-upgrade-291164 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:13:47.894843  505930 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-291164"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:13:47.894912  505930 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1025 10:13:47.905036  505930 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:13:47.905116  505930 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:13:47.914811  505930 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1025 10:13:47.935547  505930 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:13:47.956610  505930 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
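
	The kubeadm.yaml.new just copied to the node (2162 bytes) is the manifest dumped in full above, produced by filling a Go text template with values from the cluster config. A minimal sketch of that render step over a truncated, illustrative template; the struct and field names here are assumptions, not minikube's own:

	package main

	import (
		"os"
		"text/template"
	)

	// kubeadmParams carries the handful of values substituted into the
	// illustrative template below; the real config has many more knobs.
	type kubeadmParams struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		CRISocket        string
	}

	const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(initTmpl))
		p := kubeadmParams{
			AdvertiseAddress: "192.168.103.2",
			BindPort:         8443,
			NodeName:         "stopped-upgrade-291164",
			CRISocket:        "unix:///var/run/crio/crio.sock",
		}
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}
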
	I1025 10:13:47.978665  505930 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:13:47.982845  505930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:13:47.996095  505930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:13:48.089530  505930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:13:48.110985  505930 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164 for IP: 192.168.103.2
	I1025 10:13:48.111007  505930 certs.go:195] generating shared ca certs ...
	I1025 10:13:48.111027  505930 certs.go:227] acquiring lock for ca certs: {Name:mke559c04eea9c265cb2a7beab0da125bee52db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:48.111173  505930 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key
	I1025 10:13:48.111223  505930 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key
	I1025 10:13:48.111238  505930 certs.go:257] generating profile certs ...
	I1025 10:13:48.111368  505930 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/client.key
	I1025 10:13:48.111405  505930 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/apiserver.key.e66b47c4
	I1025 10:13:48.111436  505930 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/apiserver.crt.e66b47c4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1025 10:13:48.275665  505930 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/apiserver.crt.e66b47c4 ...
	I1025 10:13:48.275692  505930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/apiserver.crt.e66b47c4: {Name:mk049fb9885555d7528df76ea858b38a88968b86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:48.275868  505930 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/apiserver.key.e66b47c4 ...
	I1025 10:13:48.275887  505930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/apiserver.key.e66b47c4: {Name:mk21276022fecb336ea9df82bf35550d89bcaf0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:48.276010  505930 certs.go:382] copying /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/apiserver.crt.e66b47c4 -> /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/apiserver.crt
	I1025 10:13:48.276167  505930 certs.go:386] copying /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/apiserver.key.e66b47c4 -> /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/apiserver.key
	I1025 10:13:48.276326  505930 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/proxy-client.key
	I1025 10:13:48.276440  505930 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem (1338 bytes)
	W1025 10:13:48.276467  505930 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455_empty.pem, impossibly tiny 0 bytes
	I1025 10:13:48.276478  505930 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:13:48.276498  505930 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:13:48.276520  505930 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:13:48.276549  505930 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem (1679 bytes)
	I1025 10:13:48.276600  505930 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:13:48.277391  505930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:13:48.307333  505930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 10:13:48.341481  505930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:13:48.379613  505930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:13:48.410893  505930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1025 10:13:48.448527  505930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:13:48.478208  505930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:13:48.506199  505930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 10:13:48.534433  505930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:13:48.562195  505930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem --> /usr/share/ca-certificates/325455.pem (1338 bytes)
	I1025 10:13:48.590008  505930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /usr/share/ca-certificates/3254552.pem (1708 bytes)
	I1025 10:13:48.617592  505930 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:13:48.637143  505930 ssh_runner.go:195] Run: openssl version
	I1025 10:13:48.643375  505930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:13:48.655143  505930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:48.659278  505930 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:32 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:48.659384  505930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:48.667144  505930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:13:48.678569  505930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/325455.pem && ln -fs /usr/share/ca-certificates/325455.pem /etc/ssl/certs/325455.pem"
	I1025 10:13:48.690100  505930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/325455.pem
	I1025 10:13:48.694508  505930 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:38 /usr/share/ca-certificates/325455.pem
	I1025 10:13:48.694569  505930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/325455.pem
	I1025 10:13:48.704090  505930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/325455.pem /etc/ssl/certs/51391683.0"
	I1025 10:13:48.715686  505930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3254552.pem && ln -fs /usr/share/ca-certificates/3254552.pem /etc/ssl/certs/3254552.pem"
	I1025 10:13:48.726538  505930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3254552.pem
	I1025 10:13:48.731788  505930 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:38 /usr/share/ca-certificates/3254552.pem
	I1025 10:13:48.731847  505930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3254552.pem
	I1025 10:13:48.739136  505930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3254552.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:13:48.749163  505930 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:13:48.753352  505930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:13:48.761106  505930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:13:48.768168  505930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:13:48.775480  505930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:13:48.783485  505930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:13:48.790673  505930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
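
	Each "openssl x509 -noout -in <cert> -checkend 86400" run above asks whether the certificate expires within 86400 seconds (24 hours); a non-zero exit would force regeneration before restarting the control plane. The same check in Go, for illustration; the path in main is one of the certificates probed above:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in a PEM file
	// expires within d, mirroring what "openssl x509 -checkend" answers.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}
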
	I1025 10:13:48.797475  505930 kubeadm.go:400] StartCluster: {Name:stopped-upgrade-291164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-291164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:13:48.797562  505930 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:13:48.797610  505930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:13:48.841869  505930 cri.go:89] found id: ""
	I1025 10:13:48.841937  505930 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W1025 10:13:48.853914  505930 kubeadm.go:413] apiserver tunnel failed: apiserver port not set
	I1025 10:13:48.853949  505930 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:13:48.853960  505930 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:13:48.854009  505930 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:13:48.864288  505930 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:13:48.865157  505930 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-291164" does not appear in /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:13:48.865633  505930 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-321838/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-291164" cluster setting kubeconfig missing "stopped-upgrade-291164" context setting]
	I1025 10:13:48.866195  505930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:48.866937  505930 kapi.go:59] client config for stopped-upgrade-291164: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-321838/.minikube/profiles/stopped-upgrade-291164/client.key", CAFile:"/home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 10:13:48.867394  505930 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1025 10:13:48.867417  505930 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1025 10:13:48.867430  505930 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 10:13:48.867436  505930 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1025 10:13:48.867441  505930 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 10:13:48.867790  505930 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:13:48.878540  505930 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-25 10:13:27.961571632 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-25 10:13:47.975887500 +0000
	@@ -50,6 +50,7 @@
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	 cgroupDriver: systemd
	+containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	 hairpinMode: hairpin-veth
	 runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	
	-- /stdout --
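
	kubeadm.go:644 decides whether the cluster must be reconfigured by diffing the kubeadm.yaml already on the node against the freshly rendered kubeadm.yaml.new; here the drift is the containerRuntimeEndpoint line the old config lacked. A sketch of that check, shelling out to diff as the log implies; the helper name is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// configDrifted runs "diff -u old new": exit 0 means identical, exit 1
	// means the configs differ (drift), anything else is a real failure.
	func configDrifted(oldPath, newPath string) (bool, string, error) {
		out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
		if err == nil {
			return false, "", nil
		}
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			return true, string(out), nil
		}
		return false, "", err
	}

	func main() {
		drifted, diff, err := configDrifted(
			"/var/tmp/minikube/kubeadm.yaml",
			"/var/tmp/minikube/kubeadm.yaml.new",
		)
		if err != nil {
			panic(err)
		}
		if drifted {
			fmt.Println("kubeadm config drift detected:\n" + diff)
		}
	}
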
	I1025 10:13:48.878566  505930 kubeadm.go:1160] stopping kube-system containers ...
	I1025 10:13:48.878585  505930 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1025 10:13:48.878644  505930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:13:48.920510  505930 cri.go:89] found id: ""
	I1025 10:13:48.920575  505930 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1025 10:13:48.945967  505930 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:13:48.957940  505930 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Oct 25 10:13 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Oct 25 10:13 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Oct 25 10:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Oct 25 10:13 /etc/kubernetes/scheduler.conf
	
	I1025 10:13:48.958018  505930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf
	I1025 10:13:48.968394  505930 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:13:48.968573  505930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:13:48.979629  505930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf
	I1025 10:13:48.990998  505930 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:13:48.991066  505930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:13:49.003432  505930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf
	I1025 10:13:49.015050  505930 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:13:49.015120  505930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:13:49.039852  505930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf
	I1025 10:13:49.050793  505930 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:13:49.050870  505930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 10:13:49.064547  505930 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:13:49.074469  505930 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 10:13:49.129921  505930 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 10:13:50.013342  505930 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 10:13:50.178067  505930 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 10:13:50.258130  505930 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1025 10:13:50.324835  505930 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:13:50.324917  505930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:13:50.825535  505930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:13:47.597440  506267 cli_runner.go:164] Run: docker container inspect missing-upgrade-363411 --format={{.State.Status}}
	W1025 10:13:47.618244  506267 cli_runner.go:211] docker container inspect missing-upgrade-363411 --format={{.State.Status}} returned with exit code 1
	I1025 10:13:47.618348  506267 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-363411": docker container inspect missing-upgrade-363411 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-363411
	I1025 10:13:47.618366  506267 oci.go:673] temporary error: container missing-upgrade-363411 status is  but expect it to be exited
	I1025 10:13:47.618419  506267 retry.go:31] will retry after 1.978580016s: couldn't verify container is exited. %v: unknown state "missing-upgrade-363411": docker container inspect missing-upgrade-363411 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-363411
	I1025 10:13:49.598496  506267 cli_runner.go:164] Run: docker container inspect missing-upgrade-363411 --format={{.State.Status}}
	W1025 10:13:49.619686  506267 cli_runner.go:211] docker container inspect missing-upgrade-363411 --format={{.State.Status}} returned with exit code 1
	I1025 10:13:49.619768  506267 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-363411": docker container inspect missing-upgrade-363411 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-363411
	I1025 10:13:49.619782  506267 oci.go:673] temporary error: container missing-upgrade-363411 status is  but expect it to be exited
	I1025 10:13:49.619825  506267 retry.go:31] will retry after 3.772902502s: couldn't verify container is exited. %v: unknown state "missing-upgrade-363411": docker container inspect missing-upgrade-363411 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-363411
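
	The retry.go:31 lines above show the shutdown verification being retried with a growing delay (about 2s, then about 3.8s, consistent with a doubling backoff plus jitter). A minimal sketch of that retry shape; the attempt count, growth factor, and jitter are assumptions, not minikube's exact policy:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs f until it succeeds or attempts are exhausted, sleeping
	// a growing, jittered delay between tries, the pattern behind the
	// "will retry after 1.978580016s" / "3.772902502s" lines above.
	func retry(attempts int, base time.Duration, f func() error) error {
		var err error
		delay := base
		for i := 0; i < attempts; i++ {
			if err = f(); err == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			time.Sleep(delay + jitter)
			delay *= 2
		}
		return err
	}

	func main() {
		calls := 0
		err := retry(5, 2*time.Second, func() error {
			calls++
			if calls < 3 {
				return errors.New("container state not yet verifiable")
			}
			return nil
		})
		fmt.Println("calls:", calls, "err:", err)
	}
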
	I1025 10:13:48.503045  508860 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 10:13:48.503278  508860 start.go:159] libmachine.API.Create for "kubernetes-upgrade-311859" (driver="docker")
	I1025 10:13:48.503310  508860 client.go:168] LocalClient.Create starting
	I1025 10:13:48.503406  508860 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem
	I1025 10:13:48.503447  508860 main.go:141] libmachine: Decoding PEM data...
	I1025 10:13:48.503462  508860 main.go:141] libmachine: Parsing certificate...
	I1025 10:13:48.503517  508860 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem
	I1025 10:13:48.503543  508860 main.go:141] libmachine: Decoding PEM data...
	I1025 10:13:48.503554  508860 main.go:141] libmachine: Parsing certificate...
	I1025 10:13:48.503901  508860 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-311859 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 10:13:48.523415  508860 cli_runner.go:211] docker network inspect kubernetes-upgrade-311859 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 10:13:48.523505  508860 network_create.go:284] running [docker network inspect kubernetes-upgrade-311859] to gather additional debugging logs...
	I1025 10:13:48.523531  508860 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-311859
	W1025 10:13:48.541492  508860 cli_runner.go:211] docker network inspect kubernetes-upgrade-311859 returned with exit code 1
	I1025 10:13:48.541527  508860 network_create.go:287] error running [docker network inspect kubernetes-upgrade-311859]: docker network inspect kubernetes-upgrade-311859: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-311859 not found
	I1025 10:13:48.541546  508860 network_create.go:289] output of [docker network inspect kubernetes-upgrade-311859]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-311859 not found
	
	** /stderr **
	I1025 10:13:48.541677  508860 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:13:48.562102  508860 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b7c770f4d6bb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:31:17:4a:ca:3a} reservation:<nil>}
	I1025 10:13:48.562824  508860 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5189eca196b1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:42:d7:a0:fe:65} reservation:<nil>}
	I1025 10:13:48.563419  508860 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a58b5f36975c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1e:4d:ae:71:f0:49} reservation:<nil>}
	I1025 10:13:48.563840  508860 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d5b5f4eaecca IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2e:37:0b:e6:a3:44} reservation:<nil>}
	I1025 10:13:48.564629  508860 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001611790}
	I1025 10:13:48.564654  508860 network_create.go:124] attempt to create docker network kubernetes-upgrade-311859 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1025 10:13:48.564709  508860 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-311859 kubernetes-upgrade-311859
	I1025 10:13:48.631079  508860 network_create.go:108] docker network kubernetes-upgrade-311859 192.168.85.0/24 created
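
	The network.go lines above step through candidate private /24s (192.168.49.0, .58, .67, .76) and settle on the first free one, 192.168.85.0/24. A toy sketch of that scan; the +9 stride matches the sequence in this log, and the hard-coded taken set stands in for the real code's inspection of host interfaces and docker networks:

	package main

	import "fmt"

	// firstFreeSubnet walks 192.168.x.0/24 candidates starting at .49
	// and returns the first CIDR not already occupied.
	func firstFreeSubnet(taken map[string]bool) string {
		for third := 49; third < 256; third += 9 { // 49, 58, 67, 76, 85, ...
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[cidr] {
				return cidr
			}
		}
		return ""
	}

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
			"192.168.76.0/24": true,
		}
		fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24
	}
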
	I1025 10:13:48.631115  508860 kic.go:121] calculated static IP "192.168.85.2" for the "kubernetes-upgrade-311859" container
	I1025 10:13:48.631203  508860 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 10:13:48.651195  508860 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-311859 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-311859 --label created_by.minikube.sigs.k8s.io=true
	I1025 10:13:48.671225  508860 oci.go:103] Successfully created a docker volume kubernetes-upgrade-311859
	I1025 10:13:48.671332  508860 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-311859-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-311859 --entrypoint /usr/bin/test -v kubernetes-upgrade-311859:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 10:13:49.090298  508860 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-311859
	I1025 10:13:49.090369  508860 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 10:13:49.090401  508860 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 10:13:49.090477  508860 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-311859:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 10:13:51.325735  505930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:13:51.342383  505930 api_server.go:72] duration metric: took 1.017555349s to wait for apiserver process to appear ...
	I1025 10:13:51.342416  505930 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:13:51.342442  505930 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 10:13:53.933548  505930 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 10:13:53.933580  505930 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 10:13:53.933597  505930 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 10:13:53.948957  505930 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 10:13:53.948992  505930 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 10:13:54.342548  505930 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 10:13:54.347026  505930 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1025 10:13:54.347060  505930 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 10:13:54.843524  505930 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 10:13:54.848478  505930 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1025 10:13:54.848504  505930 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 10:13:55.343284  505930 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 10:13:55.350926  505930 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1025 10:13:55.362438  505930 api_server.go:141] control plane version: v1.28.3
	I1025 10:13:55.362661  505930 api_server.go:131] duration metric: took 4.020067287s to wait for apiserver health ...
	I1025 10:13:55.362686  505930 cni.go:84] Creating CNI manager for ""
	I1025 10:13:55.362695  505930 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:13:55.364312  505930 out.go:179] * Configuring CNI (Container Networking Interface) ...
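
Note: both failure modes in the healthz polling above trace to the same startup window. The 403s are anonymous requests made before the default RBAC bindings (which permit unauthenticated access to /healthz) are in place, and the 500s name the responsible post-start hooks ([-]poststarthook/rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) as still running; once they finish, /healthz flips to 200 "ok" at 10:13:55. The same check can be run by hand (-k skips verification of the cluster's self-signed CA; ?verbose prints the per-check list even on success):

    curl -sk 'https://192.168.103.2:8443/healthz?verbose'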
	
	
	==> CRI-O <==
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.800937615Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.801813723Z" level=info msg="Conmon does support the --sync option"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.801831534Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.801845411Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.802534359Z" level=info msg="Conmon does support the --sync option"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.802548679Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.80662803Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.806665892Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.807433152Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.807856425Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.807910306Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.814017268Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.856574285Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-dpc7k Namespace:kube-system ID:34c206069e045b90b172264f76344fb2fe7adf569e64d0ac5d78f45644d44541 UID:1170a8f9-34c4-4475-8133-52cc6e952076 NetNS:/var/run/netns/ac65af99-b58a-4cd9-914d-a34bf8be9107 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a318}] Aliases:map[]}"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.8567783Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-dpc7k for CNI network kindnet (type=ptp)"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.85776868Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.857822448Z" level=info msg="Starting seccomp notifier watcher"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.857950633Z" level=info msg="Create NRI interface"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.858212689Z" level=info msg="built-in NRI default validator is disabled"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.858225972Z" level=info msg="runtime interface created"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.858247823Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.858256525Z" level=info msg="runtime interface starting up..."
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.858266769Z" level=info msg="starting plugins..."
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.858289552Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 25 10:13:46 pause-200480 crio[2171]: time="2025-10-25T10:13:46.859204627Z" level=info msg="No systemd watchdog enabled"
	Oct 25 10:13:46 pause-200480 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
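
Note: the listen address in the configuration dump above is the CRI socket that crictl talks to. A quick way to cross-check the daemon state shown here, run on the node (e.g. via minikube ssh -p pause-200480):

    sudo systemctl status crio
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a

The last command produces the container table shown in the next section.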
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	ec3f0975fec3d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   14 seconds ago      Running             coredns                   0                   34c206069e045       coredns-66bc5c9577-dpc7k               kube-system
	fe27241176bf7       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   25 seconds ago      Running             kube-proxy                0                   4c1a0ea09c9ce       kube-proxy-9t747                       kube-system
	e080b5d65c56b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   25 seconds ago      Running             kindnet-cni               0                   5dd0e5d32784a       kindnet-s7b7r                          kube-system
	a2bf1b0b7321a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   36 seconds ago      Running             kube-scheduler            0                   19a333cb1dce3       kube-scheduler-pause-200480            kube-system
	27db2729cf64a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   36 seconds ago      Running             kube-controller-manager   0                   52c9e2175a8d7       kube-controller-manager-pause-200480   kube-system
	74b63b63d97cb       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   36 seconds ago      Running             kube-apiserver            0                   dfe645817e199       kube-apiserver-pause-200480            kube-system
	a6c95c62c336d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   36 seconds ago      Running             etcd                      0                   e53a0fb3de7e9       etcd-pause-200480                      kube-system
	
	
	==> coredns [ec3f0975fec3d919f90773255c0149ca9e9d19ae6f9ec9a6fb3defbc4471e7cf] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35353 - 46119 "HINFO IN 1146517426990655923.286383782588210196. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.150992619s
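
Note: the HINFO query for a long random name is CoreDNS's loop plugin probing for forwarding loops; an NXDOMAIN answer is the healthy outcome. Resolution can be spot-checked from a throwaway pod (the image tag here is an arbitrary choice):

    kubectl run dnstest --rm -it --restart=Never --image=busybox:1.36 -- \
      nslookup kubernetes.default.svc.cluster.local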
	
	
	==> describe nodes <==
	Name:               pause-200480
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-200480
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=pause-200480
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_13_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:13:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-200480
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:13:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:13:45 +0000   Sat, 25 Oct 2025 10:13:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:13:45 +0000   Sat, 25 Oct 2025 10:13:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:13:45 +0000   Sat, 25 Oct 2025 10:13:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:13:45 +0000   Sat, 25 Oct 2025 10:13:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-200480
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                f111898f-6a32-4b0d-97a9-8bbcb9a6dfa5
	  Boot ID:                    41de8bfa-0cc1-441f-80d4-c56fb8371229
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://Unknown
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-dpc7k                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-pause-200480                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-s7b7r                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-pause-200480             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-pause-200480    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-9t747                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-pause-200480             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node pause-200480 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node pause-200480 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node pause-200480 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node pause-200480 event: Registered Node pause-200480 in Controller
	  Normal  NodeReady                15s   kubelet          Node pause-200480 status is now: NodeReady
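
Note: the node dump above is the output of kubectl describe node pause-200480; for scripted checks the same data is easier to pull with jsonpath, e.g. the Ready condition:

    kubectl get node pause-200480 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'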
	
	
	==> dmesg <==
	[  +0.000020] ll header: 00000000: ff ff ff ff ff ff 16 b3 d7 05 74 b5 08 06
	[ +20.912051] IPv4: martian source 10.244.0.1 from 10.244.0.53, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6e b0 a7 e4 38 e4 08 06
	[Oct25 09:35] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +1.057046] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +1.023954] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +1.023909] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +1.023917] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +1.023908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +2.047808] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +4.031795] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[  +8.447358] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[ +16.382923] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000014] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[Oct25 09:36] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
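
Note: "martian source" means the kernel saw a packet whose source address is not valid on the receiving interface (here 127.0.0.1 arriving on eth0); the Oct25 09:35 timestamps predate this cluster, so these entries are noise from earlier tests on the shared host rather than a failure in this run. Logging of such packets is governed by a sysctl:

    sysctl net.ipv4.conf.all.log_martians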
	
	
	==> etcd [a6c95c62c336d6d74920be9e94fef714f88e4bb1327664ce7a2283c01f3f72ce] <==
	{"level":"warn","ts":"2025-10-25T10:13:21.467752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.492033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.501394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.517017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.525596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.534059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.543672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.557694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.564466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.576643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.585719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.596660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.606782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.616134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.624429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.635259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.646561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.655689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.664756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.673843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.685399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.702139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.711353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.722122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:13:21.782904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42812","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:13:56 up  1:56,  0 user,  load average: 3.35, 1.81, 5.97
	Linux pause-200480 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e080b5d65c56bd1b04301a4db5b669a2ce749613037e2227c561a39e07d71b3a] <==
	I1025 10:13:30.953192       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:13:30.953507       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 10:13:30.953646       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:13:30.953660       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:13:30.953683       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:13:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:13:31.247039       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:13:31.247153       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:13:31.247286       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	E1025 10:13:31.247556       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 10:13:31.345577       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 10:13:31.345872       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1025 10:13:31.387987       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:13:31.445513       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1025 10:13:32.747879       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:13:32.747922       1 metrics.go:72] Registering metrics
	I1025 10:13:32.748004       1 controller.go:711] "Syncing nftables rules"
	I1025 10:13:41.247420       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:13:41.247534       1 main.go:301] handling current node
	I1025 10:13:51.251439       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:13:51.251494       1 main.go:301] handling current node
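
Note: the "connection refused" reflector errors at 10:13:31 occur while the apiserver service VIP (10.96.0.1:443) is not yet reachable from the pod network; kindnet retries, and its caches sync about a second later. To confirm the recovery, tail the pod named in the container table above:

    kubectl -n kube-system logs kindnet-s7b7r --tail=20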
	
	
	==> kube-apiserver [74b63b63d97cbaf45ce6897ced783b4c3e4f98c71e66414df394bff0ac34580e] <==
	I1025 10:13:22.389058       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:13:22.389116       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 10:13:22.389138       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1025 10:13:22.394893       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:13:22.395066       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 10:13:22.400430       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:13:22.400631       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:13:22.580491       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:13:23.282901       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 10:13:23.286962       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 10:13:23.286984       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:13:23.821817       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:13:23.863990       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:13:23.988509       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 10:13:23.997052       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1025 10:13:23.998623       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:13:24.004356       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:13:24.312867       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:13:25.041131       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:13:25.061143       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 10:13:25.070432       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 10:13:29.367735       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:13:29.372095       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:13:29.970241       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:13:30.264822       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
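
Note: the "quota admission added evaluator" lines record the quota admission plugin lazily registering an evaluator the first time each resource type is created, and the alloc.go lines show ClusterIP assignment out of the 10.96.0.0/12 service CIDR. The default allocation is easy to verify:

    kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}'    # expect 10.96.0.1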
	
	
	==> kube-controller-manager [27db2729cf64a2e9b1d06ef82efdd2cec3eeb410d21ea6d1ed35c44ba965cd5a] <==
	I1025 10:13:29.282867       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 10:13:29.310901       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 10:13:29.310924       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 10:13:29.311040       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:13:29.311116       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 10:13:29.311396       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 10:13:29.311427       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 10:13:29.311494       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:13:29.311767       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 10:13:29.311796       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 10:13:29.311846       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 10:13:29.313033       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 10:13:29.313061       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 10:13:29.313192       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 10:13:29.313353       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 10:13:29.316299       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:13:29.316360       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 10:13:29.317565       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:13:29.323795       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 10:13:29.326182       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:13:29.327283       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 10:13:29.331639       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 10:13:29.339105       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 10:13:29.346635       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:13:44.256672       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
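
Note: the burst of "Caches are synced" lines marks the controller-manager winning leader election and starting its controllers; the final line at 10:13:44 is it leaving master-disruption mode once the node reports Ready. The leader lease it holds is visible via:

    kubectl -n kube-system get lease kube-controller-manager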
	
	
	==> kube-proxy [fe27241176bf76993884109c1cdc551c32fe9af9f43fbb3aeae01048d5b1e4bf] <==
	I1025 10:13:30.815922       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:13:30.924086       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:13:31.024987       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:13:31.025034       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 10:13:31.025203       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:13:31.054336       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:13:31.054405       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:13:31.061079       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:13:31.061542       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:13:31.061572       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:13:31.063070       1 config.go:200] "Starting service config controller"
	I1025 10:13:31.063103       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:13:31.063262       1 config.go:309] "Starting node config controller"
	I1025 10:13:31.063273       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:13:31.063280       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:13:31.063544       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:13:31.063980       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:13:31.063661       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:13:31.064261       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:13:31.163714       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:13:31.165471       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:13:31.166172       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
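
Note: the nodePortAddresses warning is advisory: with the field unset, NodePorts bind on every local IP, and the log itself suggests --nodeport-addresses primary to narrow that. With the iptables proxier selected here, the service rules land in the KUBE-SERVICES chain, which can be inspected on the node:

    minikube ssh -p pause-200480 "sudo iptables -t nat -L KUBE-SERVICES -n | head"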
	
	
	==> kube-scheduler [a2bf1b0b7321a314961ca686d9983e6fcf281b2c4096cb1d82c060bdd8b0dc28] <==
	E1025 10:13:22.333003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:13:22.333123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 10:13:22.333161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 10:13:22.333206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:13:22.333226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:13:22.333307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 10:13:22.333379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 10:13:22.333411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 10:13:22.333459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:13:22.333710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 10:13:22.333766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 10:13:22.333906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:13:23.180597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 10:13:23.185749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 10:13:23.191922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:13:23.199346       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:13:23.223517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 10:13:23.230684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 10:13:23.231677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 10:13:23.374695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:13:23.408868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:13:23.493491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1025 10:13:23.500524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 10:13:23.596421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1025 10:13:25.929836       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
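
Note: the scheduler's "Failed to watch ... forbidden" errors are the same bootstrap window seen in the healthz output earlier: the RBAC roles for system:kube-scheduler do not yet exist at 10:13:22, and the watches succeed on retry once bootstrap completes. After startup, the permissions can be confirmed with impersonation (requires admin credentials):

    kubectl auth can-i list pods --all-namespaces --as=system:kube-scheduler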
	
	
	==> kubelet <==
	Oct 25 10:13:25 pause-200480 kubelet[1314]: I1025 10:13:25.998104    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-200480" podStartSLOduration=0.99807812 podStartE2EDuration="998.07812ms" podCreationTimestamp="2025-10-25 10:13:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:13:25.986391464 +0000 UTC m=+1.175178220" watchObservedRunningTime="2025-10-25 10:13:25.99807812 +0000 UTC m=+1.186864877"
	Oct 25 10:13:26 pause-200480 kubelet[1314]: I1025 10:13:26.011099    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-200480" podStartSLOduration=1.011078172 podStartE2EDuration="1.011078172s" podCreationTimestamp="2025-10-25 10:13:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:13:25.99828082 +0000 UTC m=+1.187067578" watchObservedRunningTime="2025-10-25 10:13:26.011078172 +0000 UTC m=+1.199864987"
	Oct 25 10:13:26 pause-200480 kubelet[1314]: I1025 10:13:26.027575    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-200480" podStartSLOduration=1.027524831 podStartE2EDuration="1.027524831s" podCreationTimestamp="2025-10-25 10:13:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:13:26.01105352 +0000 UTC m=+1.199840277" watchObservedRunningTime="2025-10-25 10:13:26.027524831 +0000 UTC m=+1.216311588"
	Oct 25 10:13:29 pause-200480 kubelet[1314]: I1025 10:13:29.353689    1314 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 25 10:13:29 pause-200480 kubelet[1314]: I1025 10:13:29.354418    1314 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 25 10:13:30 pause-200480 kubelet[1314]: I1025 10:13:30.337885    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/799bde2b-b5a9-41a7-a0d2-3651a174cf6f-kube-proxy\") pod \"kube-proxy-9t747\" (UID: \"799bde2b-b5a9-41a7-a0d2-3651a174cf6f\") " pod="kube-system/kube-proxy-9t747"
	Oct 25 10:13:30 pause-200480 kubelet[1314]: I1025 10:13:30.337946    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/799bde2b-b5a9-41a7-a0d2-3651a174cf6f-xtables-lock\") pod \"kube-proxy-9t747\" (UID: \"799bde2b-b5a9-41a7-a0d2-3651a174cf6f\") " pod="kube-system/kube-proxy-9t747"
	Oct 25 10:13:30 pause-200480 kubelet[1314]: I1025 10:13:30.338048    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/799bde2b-b5a9-41a7-a0d2-3651a174cf6f-lib-modules\") pod \"kube-proxy-9t747\" (UID: \"799bde2b-b5a9-41a7-a0d2-3651a174cf6f\") " pod="kube-system/kube-proxy-9t747"
	Oct 25 10:13:30 pause-200480 kubelet[1314]: I1025 10:13:30.338090    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5d2bdb5-f9ae-4593-a6cb-d2d0063f7237-lib-modules\") pod \"kindnet-s7b7r\" (UID: \"d5d2bdb5-f9ae-4593-a6cb-d2d0063f7237\") " pod="kube-system/kindnet-s7b7r"
	Oct 25 10:13:30 pause-200480 kubelet[1314]: I1025 10:13:30.338120    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d5d2bdb5-f9ae-4593-a6cb-d2d0063f7237-cni-cfg\") pod \"kindnet-s7b7r\" (UID: \"d5d2bdb5-f9ae-4593-a6cb-d2d0063f7237\") " pod="kube-system/kindnet-s7b7r"
	Oct 25 10:13:30 pause-200480 kubelet[1314]: I1025 10:13:30.338141    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5d2bdb5-f9ae-4593-a6cb-d2d0063f7237-xtables-lock\") pod \"kindnet-s7b7r\" (UID: \"d5d2bdb5-f9ae-4593-a6cb-d2d0063f7237\") " pod="kube-system/kindnet-s7b7r"
	Oct 25 10:13:30 pause-200480 kubelet[1314]: I1025 10:13:30.338177    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2qkl\" (UniqueName: \"kubernetes.io/projected/799bde2b-b5a9-41a7-a0d2-3651a174cf6f-kube-api-access-r2qkl\") pod \"kube-proxy-9t747\" (UID: \"799bde2b-b5a9-41a7-a0d2-3651a174cf6f\") " pod="kube-system/kube-proxy-9t747"
	Oct 25 10:13:30 pause-200480 kubelet[1314]: I1025 10:13:30.338208    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wcfs\" (UniqueName: \"kubernetes.io/projected/d5d2bdb5-f9ae-4593-a6cb-d2d0063f7237-kube-api-access-7wcfs\") pod \"kindnet-s7b7r\" (UID: \"d5d2bdb5-f9ae-4593-a6cb-d2d0063f7237\") " pod="kube-system/kindnet-s7b7r"
	Oct 25 10:13:30 pause-200480 kubelet[1314]: I1025 10:13:30.988648    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9t747" podStartSLOduration=0.988619463 podStartE2EDuration="988.619463ms" podCreationTimestamp="2025-10-25 10:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:13:30.987995736 +0000 UTC m=+6.176782495" watchObservedRunningTime="2025-10-25 10:13:30.988619463 +0000 UTC m=+6.177406222"
	Oct 25 10:13:31 pause-200480 kubelet[1314]: I1025 10:13:31.005704    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-s7b7r" podStartSLOduration=1.005677277 podStartE2EDuration="1.005677277s" podCreationTimestamp="2025-10-25 10:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:13:31.00543887 +0000 UTC m=+6.194225627" watchObservedRunningTime="2025-10-25 10:13:31.005677277 +0000 UTC m=+6.194464035"
	Oct 25 10:13:41 pause-200480 kubelet[1314]: I1025 10:13:41.342934    1314 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 25 10:13:41 pause-200480 kubelet[1314]: I1025 10:13:41.413034    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1170a8f9-34c4-4475-8133-52cc6e952076-config-volume\") pod \"coredns-66bc5c9577-dpc7k\" (UID: \"1170a8f9-34c4-4475-8133-52cc6e952076\") " pod="kube-system/coredns-66bc5c9577-dpc7k"
	Oct 25 10:13:41 pause-200480 kubelet[1314]: I1025 10:13:41.413102    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2g7q\" (UniqueName: \"kubernetes.io/projected/1170a8f9-34c4-4475-8133-52cc6e952076-kube-api-access-n2g7q\") pod \"coredns-66bc5c9577-dpc7k\" (UID: \"1170a8f9-34c4-4475-8133-52cc6e952076\") " pod="kube-system/coredns-66bc5c9577-dpc7k"
	Oct 25 10:13:42 pause-200480 kubelet[1314]: I1025 10:13:42.008546    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dpc7k" podStartSLOduration=12.008520565 podStartE2EDuration="12.008520565s" podCreationTimestamp="2025-10-25 10:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:13:42.008417138 +0000 UTC m=+17.197203895" watchObservedRunningTime="2025-10-25 10:13:42.008520565 +0000 UTC m=+17.197307323"
	Oct 25 10:13:45 pause-200480 kubelet[1314]: W1025 10:13:45.195458    1314 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 25 10:13:45 pause-200480 kubelet[1314]: E1025 10:13:45.195586    1314 log.go:32] "Version from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 25 10:13:50 pause-200480 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:13:50 pause-200480 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:13:50 pause-200480 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 10:13:50 pause-200480 systemd[1]: kubelet.service: Consumed 1.234s CPU time.
	

-- /stdout --
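
Note: the kubelet log tail above ends with the CRI probe failing to dial /var/run/crio/crio.sock, after which systemd stops kubelet.service; the CRI-O socket was already gone when the pause was verified. A quick way to confirm the runtime's state on a live profile, in the same ssh form the audit entries in this report use (a sketch; assumes the pause-200480 node is still running):

	out/minikube-linux-amd64 ssh -p pause-200480 sudo systemctl status crio --no-pager
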
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-200480 -n pause-200480
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-200480 -n pause-200480: exit status 2 (377.136881ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-200480 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.50s)
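
Note: the status probe above selects a single field through a Go template (--format={{.APIServer}}), and minikube status also reports component health through its exit code, which is why exit status 2 alongside "Running" on stdout is flagged as "may be ok". For scripting against the full state, the same information is available as JSON (a sketch; assumes the profile still exists):

	out/minikube-linux-amd64 status -p pause-200480 -o json
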

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-714798 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-714798 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (321.392622ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:20:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
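
Note: the MK_ADDON_ENABLE_PAUSED error above comes from minikube's check-paused step, which lists runc containers inside the node; "open /run/runc: no such file or directory" means runc has no state directory to enumerate. The failing probe can be reproduced directly, in the same ssh form used elsewhere in this report (a sketch):

	out/minikube-linux-amd64 ssh -p old-k8s-version-714798 sudo runc list -f json
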
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-714798 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-714798 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-714798 describe deploy/metrics-server -n kube-system: exit status 1 (71.569424ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-714798 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
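
Note: the assertion at start_stop_delete_test.go:219 greps the Deployment description for the rewritten image "fake.domain/registry.k8s.io/echoserver:1.4"; the info is empty here because the addon enable failed before the Deployment was created (hence the NotFound above). When the Deployment does exist, the image can be read directly with a jsonpath query (a sketch using this profile's context):

	kubectl --context old-k8s-version-714798 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
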
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-714798
helpers_test.go:243: (dbg) docker inspect old-k8s-version-714798:

-- stdout --
	[
	    {
	        "Id": "0ea7bd002b137c3f6132bee18d30042afc6bb85a08179349eb3f67fde3a86ecb",
	        "Created": "2025-10-25T10:19:03.747366257Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 596792,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:19:03.810039542Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/0ea7bd002b137c3f6132bee18d30042afc6bb85a08179349eb3f67fde3a86ecb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0ea7bd002b137c3f6132bee18d30042afc6bb85a08179349eb3f67fde3a86ecb/hostname",
	        "HostsPath": "/var/lib/docker/containers/0ea7bd002b137c3f6132bee18d30042afc6bb85a08179349eb3f67fde3a86ecb/hosts",
	        "LogPath": "/var/lib/docker/containers/0ea7bd002b137c3f6132bee18d30042afc6bb85a08179349eb3f67fde3a86ecb/0ea7bd002b137c3f6132bee18d30042afc6bb85a08179349eb3f67fde3a86ecb-json.log",
	        "Name": "/old-k8s-version-714798",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-714798:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-714798",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0ea7bd002b137c3f6132bee18d30042afc6bb85a08179349eb3f67fde3a86ecb",
	                "LowerDir": "/var/lib/docker/overlay2/caac5b3fb2b5e719c459568c7f64a1473d2acbb34aff947f1f76651aa0e47b7e-init/diff:/var/lib/docker/overlay2/9d1960ec6ab151df7efbd4d43f9ccd1aaf4e0bd9e7db4285118644a6d5a54279/diff",
	                "MergedDir": "/var/lib/docker/overlay2/caac5b3fb2b5e719c459568c7f64a1473d2acbb34aff947f1f76651aa0e47b7e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/caac5b3fb2b5e719c459568c7f64a1473d2acbb34aff947f1f76651aa0e47b7e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/caac5b3fb2b5e719c459568c7f64a1473d2acbb34aff947f1f76651aa0e47b7e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-714798",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-714798/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-714798",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-714798",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-714798",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cdf03c27cc1b07e29b5faa76c3896533a2e57d42f67ba87e2830bb2ce71987bc",
	            "SandboxKey": "/var/run/docker/netns/cdf03c27cc1b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-714798": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:36:80:84:c5:fd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cc93092e09ae8d654ec66b5e009efa3952011514f4834e7a4c9ac844956e7c64",
	                    "EndpointID": "2aa414c516f0b6b4e6ec2f7625944705c69f89d011b5a302421707a667a03538",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-714798",
	                        "0ea7bd002b13"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
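
Note: of the full docker inspect dump above, the fields relevant to a pause-related failure are the container's run state (.State.Status and .State.Paused, both visible in the JSON). They can be pulled without the whole document via a Go template (a sketch):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' old-k8s-version-714798
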
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-714798 -n old-k8s-version-714798
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-714798 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-714798 logs -n 25: (1.401189673s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                     ARGS                                                                     │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-119085 sudo systemctl status kubelet --all --full --no-pager                                                                      │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │ 25 Oct 25 10:19 UTC │
	│ ssh     │ -p flannel-119085 sudo systemctl cat kubelet --no-pager                                                                                      │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │ 25 Oct 25 10:19 UTC │
	│ ssh     │ -p flannel-119085 sudo journalctl -xeu kubelet --all --full --no-pager                                                                       │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │ 25 Oct 25 10:19 UTC │
	│ ssh     │ -p flannel-119085 sudo cat /etc/kubernetes/kubelet.conf                                                                                      │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │ 25 Oct 25 10:19 UTC │
	│ ssh     │ -p flannel-119085 sudo cat /var/lib/kubelet/config.yaml                                                                                      │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │ 25 Oct 25 10:19 UTC │
	│ ssh     │ -p flannel-119085 sudo systemctl status docker --all --full --no-pager                                                                       │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │                     │
	│ ssh     │ -p flannel-119085 sudo systemctl cat docker --no-pager                                                                                       │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │ 25 Oct 25 10:19 UTC │
	│ ssh     │ -p flannel-119085 sudo cat /etc/docker/daemon.json                                                                                           │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │                     │
	│ ssh     │ -p flannel-119085 sudo docker system info                                                                                                    │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │                     │
	│ ssh     │ -p flannel-119085 sudo systemctl status cri-docker --all --full --no-pager                                                                   │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │                     │
	│ ssh     │ -p flannel-119085 sudo systemctl cat cri-docker --no-pager                                                                                   │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │ 25 Oct 25 10:19 UTC │
	│ ssh     │ -p flannel-119085 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                              │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │                     │
	│ ssh     │ -p flannel-119085 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                        │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │ 25 Oct 25 10:19 UTC │
	│ ssh     │ -p flannel-119085 sudo cri-dockerd --version                                                                                                 │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo systemctl status containerd --all --full --no-pager                                                                   │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ ssh     │ -p flannel-119085 sudo systemctl cat containerd --no-pager                                                                                   │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo cat /lib/systemd/system/containerd.service                                                                            │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo cat /etc/containerd/config.toml                                                                                       │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo containerd config dump                                                                                                │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo systemctl status crio --all --full --no-pager                                                                         │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo systemctl cat crio --no-pager                                                                                         │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                               │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-714798 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ old-k8s-version-714798 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ ssh     │ -p flannel-119085 sudo crio config                                                                                                           │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ delete  │ -p flannel-119085                                                                                                                            │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:19:50
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:19:50.258369  613485 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:19:50.258698  613485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:19:50.258711  613485 out.go:374] Setting ErrFile to fd 2...
	I1025 10:19:50.258716  613485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:19:50.259077  613485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 10:19:50.259729  613485 out.go:368] Setting JSON to false
	I1025 10:19:50.261434  613485 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7339,"bootTime":1761380251,"procs":368,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 10:19:50.261587  613485 start.go:141] virtualization: kvm guest
	I1025 10:19:50.264130  613485 out.go:179] * [default-k8s-diff-port-767846] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 10:19:50.265637  613485 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:19:50.265638  613485 notify.go:220] Checking for updates...
	I1025 10:19:50.266996  613485 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:19:50.268682  613485 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:19:50.270167  613485 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	I1025 10:19:50.273553  613485 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 10:19:50.275541  613485 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:19:50.277497  613485 config.go:182] Loaded profile config "flannel-119085": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:19:50.277645  613485 config.go:182] Loaded profile config "no-preload-899665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:19:50.277760  613485 config.go:182] Loaded profile config "old-k8s-version-714798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:19:50.277881  613485 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:19:50.307150  613485 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 10:19:50.307359  613485 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:19:50.389525  613485 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:92 SystemTime:2025-10-25 10:19:50.375894074 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:19:50.389665  613485 docker.go:318] overlay module found
	I1025 10:19:50.392073  613485 out.go:179] * Using the docker driver based on user configuration
	I1025 10:19:50.393470  613485 start.go:305] selected driver: docker
	I1025 10:19:50.393491  613485 start.go:925] validating driver "docker" against <nil>
	I1025 10:19:50.393506  613485 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:19:50.394381  613485 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:19:50.481713  613485 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:82 OomKillDisable:false NGoroutines:93 SystemTime:2025-10-25 10:19:50.469040378 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:19:50.481924  613485 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 10:19:50.482228  613485 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:19:50.484338  613485 out.go:179] * Using Docker driver with root privileges
	I1025 10:19:50.485781  613485 cni.go:84] Creating CNI manager for ""
	I1025 10:19:50.485858  613485 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:19:50.485874  613485 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 10:19:50.485968  613485 start.go:349] cluster config:
	{Name:default-k8s-diff-port-767846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-767846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:19:50.487508  613485 out.go:179] * Starting "default-k8s-diff-port-767846" primary control-plane node in "default-k8s-diff-port-767846" cluster
	I1025 10:19:50.489176  613485 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:19:50.491081  613485 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:19:50.492464  613485 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:19:50.492495  613485 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:19:50.492523  613485 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 10:19:50.492539  613485 cache.go:58] Caching tarball of preloaded images
	I1025 10:19:50.492670  613485 preload.go:233] Found /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 10:19:50.492690  613485 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:19:50.492836  613485 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/config.json ...
	I1025 10:19:50.492869  613485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/config.json: {Name:mk19f65663fb53332930464431a9a6bd74d576ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:19:50.517986  613485 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:19:50.518018  613485 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:19:50.518040  613485 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:19:50.518084  613485 start.go:360] acquireMachinesLock for default-k8s-diff-port-767846: {Name:mkfce83ea9c2f2735b28d97963bf8e1ce130c344 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:19:50.518201  613485 start.go:364] duration metric: took 94.489µs to acquireMachinesLock for "default-k8s-diff-port-767846"
	I1025 10:19:50.518230  613485 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-767846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-767846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:19:50.518333  613485 start.go:125] createHost starting for "" (driver="docker")
	I1025 10:19:46.999452  604413 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 10:19:47.295335  604413 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 10:19:47.295525  604413 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-899665] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 10:19:47.472342  604413 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 10:19:47.472554  604413 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-899665] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 10:19:47.597255  604413 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 10:19:47.829625  604413 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 10:19:47.963623  604413 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 10:19:47.963712  604413 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 10:19:48.647052  604413 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 10:19:48.865856  604413 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 10:19:49.050129  604413 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 10:19:49.609420  604413 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 10:19:50.580295  604413 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 10:19:50.580955  604413 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 10:19:50.587664  604413 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 10:19:48.514592  595831 node_ready.go:49] node "old-k8s-version-714798" is "Ready"
	I1025 10:19:48.514628  595831 node_ready.go:38] duration metric: took 13.011011465s for node "old-k8s-version-714798" to be "Ready" ...
	I1025 10:19:48.514655  595831 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:19:48.514739  595831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:19:48.534606  595831 api_server.go:72] duration metric: took 13.566508515s to wait for apiserver process to appear ...
	I1025 10:19:48.534639  595831 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:19:48.534663  595831 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 10:19:48.541362  595831 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1025 10:19:48.542899  595831 api_server.go:141] control plane version: v1.28.0
	I1025 10:19:48.542932  595831 api_server.go:131] duration metric: took 8.28477ms to wait for apiserver health ...
	I1025 10:19:48.542943  595831 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:19:48.548033  595831 system_pods.go:59] 8 kube-system pods found
	I1025 10:19:48.548099  595831 system_pods.go:61] "coredns-5dd5756b68-k5644" [2c88bd24-b8f1-44bf-83de-2052b4b210fc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:19:48.548111  595831 system_pods.go:61] "etcd-old-k8s-version-714798" [48cdd11f-6f4c-4be7-8d76-a775fc48fd2a] Running
	I1025 10:19:48.548120  595831 system_pods.go:61] "kindnet-g9r7c" [b38a2108-5fba-42dd-82ea-22ed6eafbe86] Running
	I1025 10:19:48.548138  595831 system_pods.go:61] "kube-apiserver-old-k8s-version-714798" [242fcfd5-365c-4c41-929d-90171efa0609] Running
	I1025 10:19:48.548143  595831 system_pods.go:61] "kube-controller-manager-old-k8s-version-714798" [1c0e41d0-1bf8-4361-a207-cc6aee4d0b19] Running
	I1025 10:19:48.548149  595831 system_pods.go:61] "kube-proxy-kqg7q" [e6fe02fa-9fa4-4ff6-967f-e6f1bdeb8d6b] Running
	I1025 10:19:48.548154  595831 system_pods.go:61] "kube-scheduler-old-k8s-version-714798" [aafd7481-3533-4614-84e1-1bd872d1f812] Running
	I1025 10:19:48.548161  595831 system_pods.go:61] "storage-provisioner" [fa27e0de-acda-44a9-a974-7abe0a4c94df] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:19:48.548171  595831 system_pods.go:74] duration metric: took 5.219239ms to wait for pod list to return data ...
	I1025 10:19:48.548182  595831 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:19:48.551098  595831 default_sa.go:45] found service account: "default"
	I1025 10:19:48.551126  595831 default_sa.go:55] duration metric: took 2.928093ms for default service account to be created ...
	I1025 10:19:48.551139  595831 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:19:48.555234  595831 system_pods.go:86] 8 kube-system pods found
	I1025 10:19:48.555267  595831 system_pods.go:89] "coredns-5dd5756b68-k5644" [2c88bd24-b8f1-44bf-83de-2052b4b210fc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:19:48.555275  595831 system_pods.go:89] "etcd-old-k8s-version-714798" [48cdd11f-6f4c-4be7-8d76-a775fc48fd2a] Running
	I1025 10:19:48.555283  595831 system_pods.go:89] "kindnet-g9r7c" [b38a2108-5fba-42dd-82ea-22ed6eafbe86] Running
	I1025 10:19:48.555290  595831 system_pods.go:89] "kube-apiserver-old-k8s-version-714798" [242fcfd5-365c-4c41-929d-90171efa0609] Running
	I1025 10:19:48.555296  595831 system_pods.go:89] "kube-controller-manager-old-k8s-version-714798" [1c0e41d0-1bf8-4361-a207-cc6aee4d0b19] Running
	I1025 10:19:48.555301  595831 system_pods.go:89] "kube-proxy-kqg7q" [e6fe02fa-9fa4-4ff6-967f-e6f1bdeb8d6b] Running
	I1025 10:19:48.555306  595831 system_pods.go:89] "kube-scheduler-old-k8s-version-714798" [aafd7481-3533-4614-84e1-1bd872d1f812] Running
	I1025 10:19:48.555348  595831 system_pods.go:89] "storage-provisioner" [fa27e0de-acda-44a9-a974-7abe0a4c94df] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:19:48.555461  595831 retry.go:31] will retry after 310.600843ms: missing components: kube-dns
	I1025 10:19:48.871585  595831 system_pods.go:86] 8 kube-system pods found
	I1025 10:19:48.871624  595831 system_pods.go:89] "coredns-5dd5756b68-k5644" [2c88bd24-b8f1-44bf-83de-2052b4b210fc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:19:48.871632  595831 system_pods.go:89] "etcd-old-k8s-version-714798" [48cdd11f-6f4c-4be7-8d76-a775fc48fd2a] Running
	I1025 10:19:48.871641  595831 system_pods.go:89] "kindnet-g9r7c" [b38a2108-5fba-42dd-82ea-22ed6eafbe86] Running
	I1025 10:19:48.871649  595831 system_pods.go:89] "kube-apiserver-old-k8s-version-714798" [242fcfd5-365c-4c41-929d-90171efa0609] Running
	I1025 10:19:48.871656  595831 system_pods.go:89] "kube-controller-manager-old-k8s-version-714798" [1c0e41d0-1bf8-4361-a207-cc6aee4d0b19] Running
	I1025 10:19:48.871661  595831 system_pods.go:89] "kube-proxy-kqg7q" [e6fe02fa-9fa4-4ff6-967f-e6f1bdeb8d6b] Running
	I1025 10:19:48.871666  595831 system_pods.go:89] "kube-scheduler-old-k8s-version-714798" [aafd7481-3533-4614-84e1-1bd872d1f812] Running
	I1025 10:19:48.871673  595831 system_pods.go:89] "storage-provisioner" [fa27e0de-acda-44a9-a974-7abe0a4c94df] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:19:48.871698  595831 retry.go:31] will retry after 253.41393ms: missing components: kube-dns
	I1025 10:19:49.451415  595831 system_pods.go:86] 8 kube-system pods found
	I1025 10:19:49.451449  595831 system_pods.go:89] "coredns-5dd5756b68-k5644" [2c88bd24-b8f1-44bf-83de-2052b4b210fc] Running
	I1025 10:19:49.451456  595831 system_pods.go:89] "etcd-old-k8s-version-714798" [48cdd11f-6f4c-4be7-8d76-a775fc48fd2a] Running
	I1025 10:19:49.451460  595831 system_pods.go:89] "kindnet-g9r7c" [b38a2108-5fba-42dd-82ea-22ed6eafbe86] Running
	I1025 10:19:49.451465  595831 system_pods.go:89] "kube-apiserver-old-k8s-version-714798" [242fcfd5-365c-4c41-929d-90171efa0609] Running
	I1025 10:19:49.451472  595831 system_pods.go:89] "kube-controller-manager-old-k8s-version-714798" [1c0e41d0-1bf8-4361-a207-cc6aee4d0b19] Running
	I1025 10:19:49.451477  595831 system_pods.go:89] "kube-proxy-kqg7q" [e6fe02fa-9fa4-4ff6-967f-e6f1bdeb8d6b] Running
	I1025 10:19:49.451482  595831 system_pods.go:89] "kube-scheduler-old-k8s-version-714798" [aafd7481-3533-4614-84e1-1bd872d1f812] Running
	I1025 10:19:49.451487  595831 system_pods.go:89] "storage-provisioner" [fa27e0de-acda-44a9-a974-7abe0a4c94df] Running
	I1025 10:19:49.451502  595831 system_pods.go:126] duration metric: took 900.355024ms to wait for k8s-apps to be running ...
	I1025 10:19:49.451515  595831 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:19:49.451574  595831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:19:49.466218  595831 system_svc.go:56] duration metric: took 14.693606ms WaitForService to wait for kubelet
	I1025 10:19:49.466256  595831 kubeadm.go:586] duration metric: took 14.498165136s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:19:49.466280  595831 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:19:49.512137  595831 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 10:19:49.512173  595831 node_conditions.go:123] node cpu capacity is 8
	I1025 10:19:49.512187  595831 node_conditions.go:105] duration metric: took 45.900495ms to run NodePressure ...
	I1025 10:19:49.512203  595831 start.go:241] waiting for startup goroutines ...
	I1025 10:19:49.512212  595831 start.go:246] waiting for cluster config update ...
	I1025 10:19:49.512230  595831 start.go:255] writing updated cluster config ...
	I1025 10:19:49.513378  595831 ssh_runner.go:195] Run: rm -f paused
	I1025 10:19:49.518249  595831 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:19:49.522917  595831 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-k5644" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:19:49.528252  595831 pod_ready.go:94] pod "coredns-5dd5756b68-k5644" is "Ready"
	I1025 10:19:49.528281  595831 pod_ready.go:86] duration metric: took 5.3382ms for pod "coredns-5dd5756b68-k5644" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:19:49.531206  595831 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:19:49.536967  595831 pod_ready.go:94] pod "etcd-old-k8s-version-714798" is "Ready"
	I1025 10:19:49.536992  595831 pod_ready.go:86] duration metric: took 5.764583ms for pod "etcd-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:19:49.540448  595831 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:19:49.545985  595831 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-714798" is "Ready"
	I1025 10:19:49.546014  595831 pod_ready.go:86] duration metric: took 5.539707ms for pod "kube-apiserver-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:19:49.553978  595831 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:19:49.923893  595831 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-714798" is "Ready"
	I1025 10:19:49.923924  595831 pod_ready.go:86] duration metric: took 369.899793ms for pod "kube-controller-manager-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:19:50.123243  595831 pod_ready.go:83] waiting for pod "kube-proxy-kqg7q" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:19:50.522917  595831 pod_ready.go:94] pod "kube-proxy-kqg7q" is "Ready"
	I1025 10:19:50.522946  595831 pod_ready.go:86] duration metric: took 399.672566ms for pod "kube-proxy-kqg7q" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:19:50.723879  595831 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:19:51.122422  595831 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-714798" is "Ready"
	I1025 10:19:51.122454  595831 pod_ready.go:86] duration metric: took 398.543741ms for pod "kube-scheduler-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:19:51.122468  595831 pod_ready.go:40] duration metric: took 1.604182143s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:19:51.190565  595831 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1025 10:19:51.193223  595831 out.go:203] 
	W1025 10:19:51.194713  595831 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1025 10:19:51.196186  595831 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1025 10:19:51.197957  595831 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-714798" cluster and "default" namespace by default
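	The skew warning above is purely a client/server comparison: /usr/local/bin/kubectl is v1.34.1 while this cluster runs v1.28.0. As the log itself suggests, letting minikube fetch a matching kubectl avoids the skew; an illustrative invocation against this profile:

	  # hedged example: `minikube kubectl` uses a kubectl matching the cluster version
	  minikube -p old-k8s-version-714798 kubectl -- get pods -A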
	I1025 10:19:50.589540  604413 out.go:252]   - Booting up control plane ...
	I1025 10:19:50.589681  604413 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 10:19:50.589784  604413 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 10:19:50.589960  604413 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 10:19:50.608176  604413 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 10:19:50.608402  604413 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 10:19:50.617181  604413 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 10:19:50.617557  604413 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 10:19:50.617658  604413 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 10:19:50.756078  604413 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 10:19:50.756244  604413 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
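	The kubelet-check above polls a plain HTTP health endpoint on the node. A minimal sketch of the same probe, assuming shell access to the node and curl installed (port and path taken from the log line):

	  # kubeadm retries this endpoint until it returns 200
	  curl -sf http://127.0.0.1:10248/healthz && echo "kubelet healthy"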
	I1025 10:19:50.521280  613485 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 10:19:50.521607  613485 start.go:159] libmachine.API.Create for "default-k8s-diff-port-767846" (driver="docker")
	I1025 10:19:50.521645  613485 client.go:168] LocalClient.Create starting
	I1025 10:19:50.521743  613485 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem
	I1025 10:19:50.521783  613485 main.go:141] libmachine: Decoding PEM data...
	I1025 10:19:50.521807  613485 main.go:141] libmachine: Parsing certificate...
	I1025 10:19:50.521900  613485 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem
	I1025 10:19:50.521932  613485 main.go:141] libmachine: Decoding PEM data...
	I1025 10:19:50.521943  613485 main.go:141] libmachine: Parsing certificate...
	I1025 10:19:50.522400  613485 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-767846 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 10:19:50.541573  613485 cli_runner.go:211] docker network inspect default-k8s-diff-port-767846 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 10:19:50.541654  613485 network_create.go:284] running [docker network inspect default-k8s-diff-port-767846] to gather additional debugging logs...
	I1025 10:19:50.541688  613485 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-767846
	W1025 10:19:50.560135  613485 cli_runner.go:211] docker network inspect default-k8s-diff-port-767846 returned with exit code 1
	I1025 10:19:50.560167  613485 network_create.go:287] error running [docker network inspect default-k8s-diff-port-767846]: docker network inspect default-k8s-diff-port-767846: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-767846 not found
	I1025 10:19:50.560180  613485 network_create.go:289] output of [docker network inspect default-k8s-diff-port-767846]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-767846 not found
	
	** /stderr **
	I1025 10:19:50.560272  613485 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:19:50.581003  613485 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b7c770f4d6bb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:31:17:4a:ca:3a} reservation:<nil>}
	I1025 10:19:50.581882  613485 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5189eca196b1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:42:d7:a0:fe:65} reservation:<nil>}
	I1025 10:19:50.582593  613485 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a58b5f36975c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1e:4d:ae:71:f0:49} reservation:<nil>}
	I1025 10:19:50.583127  613485 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c8aca1f62a35 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ce:65:a5:98:3f:04} reservation:<nil>}
	I1025 10:19:50.583592  613485 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-cc93092e09ae IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:c2:73:0a:fa:f6:13} reservation:<nil>}
	I1025 10:19:50.584012  613485 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-0e52abe99641 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:12:5f:3c:49:72:70} reservation:<nil>}
	I1025 10:19:50.584701  613485 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018db480}
	I1025 10:19:50.584730  613485 network_create.go:124] attempt to create docker network default-k8s-diff-port-767846 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1025 10:19:50.584781  613485 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-767846 default-k8s-diff-port-767846
	I1025 10:19:50.665482  613485 network_create.go:108] docker network default-k8s-diff-port-767846 192.168.103.0/24 created
	I1025 10:19:50.665524  613485 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-767846" container
	I1025 10:19:50.665601  613485 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 10:19:50.685105  613485 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-767846 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-767846 --label created_by.minikube.sigs.k8s.io=true
	I1025 10:19:50.706886  613485 oci.go:103] Successfully created a docker volume default-k8s-diff-port-767846
	I1025 10:19:50.707016  613485 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-767846-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-767846 --entrypoint /usr/bin/test -v default-k8s-diff-port-767846:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 10:19:51.178349  613485 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-767846
	I1025 10:19:51.178395  613485 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:19:51.178422  613485 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 10:19:51.178502  613485 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-767846:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
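	The extraction run above reuses the kicbase image as a throwaway tar runner so the preloaded images land in the named volume that later backs the node's /var. A generic sketch of that pattern (the two shell variables are placeholders for illustration, not minikube internals):

	  # hedged sketch: unpack an lz4 preload tarball into a docker volume
	  docker run --rm --entrypoint /usr/bin/tar \
	    -v "$PRELOAD_TARBALL:/preloaded.tar:ro" \
	    -v default-k8s-diff-port-767846:/extractDir \
	    "$KICBASE_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir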
	I1025 10:19:52.257897  604413 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.502096658s
	I1025 10:19:52.262996  604413 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 10:19:52.263161  604413 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1025 10:19:52.263292  604413 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 10:19:52.263550  604413 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 10:19:53.611887  604413 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.348752197s
	I1025 10:19:54.657842  604413 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.394717882s
	I1025 10:19:57.265025  604413 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.001849666s
	I1025 10:19:57.280632  604413 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 10:19:57.295843  604413 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 10:19:57.312434  604413 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 10:19:57.313228  604413 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-899665 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 10:19:57.323883  604413 kubeadm.go:318] [bootstrap-token] Using token: 3k8t0l.jwnokuxogkhw7eil
	I1025 10:19:57.325576  604413 out.go:252]   - Configuring RBAC rules ...
	I1025 10:19:57.325786  604413 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 10:19:57.334634  604413 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 10:19:57.344030  604413 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 10:19:57.350164  604413 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 10:19:57.356283  604413 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 10:19:57.361413  604413 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 10:19:57.672060  604413 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 10:19:58.097336  604413 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 10:19:58.672992  604413 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 10:19:58.673863  604413 kubeadm.go:318] 
	I1025 10:19:58.673972  604413 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 10:19:58.673984  604413 kubeadm.go:318] 
	I1025 10:19:58.674065  604413 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 10:19:58.674075  604413 kubeadm.go:318] 
	I1025 10:19:58.674111  604413 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 10:19:58.674203  604413 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 10:19:58.674282  604413 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 10:19:58.674292  604413 kubeadm.go:318] 
	I1025 10:19:58.674428  604413 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 10:19:58.674453  604413 kubeadm.go:318] 
	I1025 10:19:58.674530  604413 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 10:19:58.674539  604413 kubeadm.go:318] 
	I1025 10:19:58.674620  604413 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 10:19:58.674727  604413 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 10:19:58.674829  604413 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 10:19:58.674843  604413 kubeadm.go:318] 
	I1025 10:19:58.674986  604413 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 10:19:58.675077  604413 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 10:19:58.675085  604413 kubeadm.go:318] 
	I1025 10:19:58.675216  604413 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 3k8t0l.jwnokuxogkhw7eil \
	I1025 10:19:58.675408  604413 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d220a0fdd8ffa4188e5a3a4b5b37c16072a735a52e5507fcdbcb4d38c461642f \
	I1025 10:19:58.675444  604413 kubeadm.go:318] 	--control-plane 
	I1025 10:19:58.675453  604413 kubeadm.go:318] 
	I1025 10:19:58.675609  604413 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 10:19:58.675620  604413 kubeadm.go:318] 
	I1025 10:19:58.675744  604413 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 3k8t0l.jwnokuxogkhw7eil \
	I1025 10:19:58.675903  604413 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d220a0fdd8ffa4188e5a3a4b5b37c16072a735a52e5507fcdbcb4d38c461642f 
	I1025 10:19:58.678054  604413 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 10:19:58.678236  604413 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 10:19:58.678291  604413 cni.go:84] Creating CNI manager for ""
	I1025 10:19:58.678313  604413 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:19:58.680139  604413 out.go:179] * Configuring CNI (Container Networking Interface) ...
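	Since the docker driver plus crio runtime leads minikube to recommend kindnet here, an illustrative post-apply check (the label is an assumption based on the upstream kindnet manifest; adjust if your deployment differs):

	  # kindnet runs as a kube-system DaemonSet, one pod per node
	  kubectl -n kube-system get pods -l app=kindnet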
	I1025 10:19:56.393687  613485 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-767846:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.21513694s)
	I1025 10:19:56.393727  613485 kic.go:203] duration metric: took 5.215302211s to extract preloaded images to volume ...
	W1025 10:19:56.393805  613485 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 10:19:56.393846  613485 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 10:19:56.393884  613485 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 10:19:56.473605  613485 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-767846 --name default-k8s-diff-port-767846 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-767846 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-767846 --network default-k8s-diff-port-767846 --ip 192.168.103.2 --volume default-k8s-diff-port-767846:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 10:19:56.850700  613485 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Running}}
	I1025 10:19:56.880975  613485 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:19:56.909279  613485 cli_runner.go:164] Run: docker exec default-k8s-diff-port-767846 stat /var/lib/dpkg/alternatives/iptables
	I1025 10:19:56.971718  613485 oci.go:144] the created container "default-k8s-diff-port-767846" has a running status.
	I1025 10:19:56.971756  613485 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa...
	I1025 10:19:57.519285  613485 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 10:19:57.547618  613485 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:19:57.569180  613485 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 10:19:57.569203  613485 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-767846 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 10:19:57.614369  613485 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:19:57.633418  613485 machine.go:93] provisionDockerMachine start ...
	I1025 10:19:57.633506  613485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:19:57.652654  613485 main.go:141] libmachine: Using SSH client type: native
	I1025 10:19:57.652931  613485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1025 10:19:57.652960  613485 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:19:57.814119  613485 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-767846
	
	I1025 10:19:57.814151  613485 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-767846"
	I1025 10:19:57.814230  613485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:19:57.839784  613485 main.go:141] libmachine: Using SSH client type: native
	I1025 10:19:57.840139  613485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1025 10:19:57.840160  613485 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-767846 && echo "default-k8s-diff-port-767846" | sudo tee /etc/hostname
	I1025 10:19:58.013594  613485 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-767846
	
	I1025 10:19:58.013675  613485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:19:58.035121  613485 main.go:141] libmachine: Using SSH client type: native
	I1025 10:19:58.035443  613485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1025 10:19:58.035491  613485 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-767846' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-767846/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-767846' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:19:58.191210  613485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:19:58.191245  613485 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-321838/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-321838/.minikube}
	I1025 10:19:58.191287  613485 ubuntu.go:190] setting up certificates
	I1025 10:19:58.191306  613485 provision.go:84] configureAuth start
	I1025 10:19:58.191418  613485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-767846
	I1025 10:19:58.212235  613485 provision.go:143] copyHostCerts
	I1025 10:19:58.212326  613485 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem, removing ...
	I1025 10:19:58.212341  613485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem
	I1025 10:19:58.212436  613485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem (1123 bytes)
	I1025 10:19:58.212563  613485 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem, removing ...
	I1025 10:19:58.212576  613485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem
	I1025 10:19:58.212623  613485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem (1679 bytes)
	I1025 10:19:58.212706  613485 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem, removing ...
	I1025 10:19:58.212716  613485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem
	I1025 10:19:58.212749  613485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem (1078 bytes)
	I1025 10:19:58.212823  613485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-767846 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-767846 localhost minikube]
	I1025 10:19:58.310526  613485 provision.go:177] copyRemoteCerts
	I1025 10:19:58.310588  613485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:19:58.310624  613485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:19:58.330042  613485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:19:58.436499  613485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:19:58.460312  613485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1025 10:19:58.483710  613485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 10:19:58.509494  613485 provision.go:87] duration metric: took 318.100835ms to configureAuth
	I1025 10:19:58.509525  613485 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:19:58.509766  613485 config.go:182] Loaded profile config "default-k8s-diff-port-767846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:19:58.510006  613485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:19:58.533575  613485 main.go:141] libmachine: Using SSH client type: native
	I1025 10:19:58.533899  613485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1025 10:19:58.533934  613485 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:19:58.838690  613485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:19:58.838720  613485 machine.go:96] duration metric: took 1.205279016s to provisionDockerMachine
	I1025 10:19:58.838733  613485 client.go:171] duration metric: took 8.317077222s to LocalClient.Create
	I1025 10:19:58.838751  613485 start.go:167] duration metric: took 8.317147053s to libmachine.API.Create "default-k8s-diff-port-767846"
	I1025 10:19:58.838762  613485 start.go:293] postStartSetup for "default-k8s-diff-port-767846" (driver="docker")
	I1025 10:19:58.838774  613485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:19:58.838841  613485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:19:58.838910  613485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:19:58.863817  613485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:19:58.978963  613485 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:19:58.984512  613485 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:19:58.984555  613485 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:19:58.984569  613485 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/addons for local assets ...
	I1025 10:19:58.984635  613485 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/files for local assets ...
	I1025 10:19:58.984750  613485 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem -> 3254552.pem in /etc/ssl/certs
	I1025 10:19:58.984881  613485 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:19:58.998539  613485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:19:59.028361  613485 start.go:296] duration metric: took 189.582473ms for postStartSetup
	I1025 10:19:59.028725  613485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-767846
	I1025 10:19:59.055795  613485 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/config.json ...
	I1025 10:19:59.056280  613485 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:19:59.056376  613485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:19:59.084201  613485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:19:59.196072  613485 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:19:59.203397  613485 start.go:128] duration metric: took 8.685042466s to createHost
	I1025 10:19:59.203429  613485 start.go:83] releasing machines lock for "default-k8s-diff-port-767846", held for 8.685214434s
	I1025 10:19:59.203507  613485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-767846
	I1025 10:19:59.227684  613485 ssh_runner.go:195] Run: cat /version.json
	I1025 10:19:59.227704  613485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:19:59.227759  613485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:19:59.227786  613485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:19:59.251725  613485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:19:59.253703  613485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:19:59.414185  613485 ssh_runner.go:195] Run: systemctl --version
	I1025 10:19:59.421943  613485 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:19:59.468115  613485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:19:59.473732  613485 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:19:59.473812  613485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:19:59.504309  613485 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 10:19:59.504361  613485 start.go:495] detecting cgroup driver to use...
	I1025 10:19:59.504399  613485 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 10:19:59.504455  613485 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:19:59.529694  613485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:19:59.545483  613485 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:19:59.545547  613485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:19:59.568592  613485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:19:59.590426  613485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:19:59.690515  613485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:19:59.812303  613485 docker.go:234] disabling docker service ...
	I1025 10:19:59.812413  613485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:19:59.837445  613485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:19:59.855457  613485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:19:59.959554  613485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:20:00.048196  613485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:20:00.063668  613485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:20:00.081936  613485 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:20:00.082006  613485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:00.095772  613485 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 10:20:00.095848  613485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:00.107251  613485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:00.118555  613485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:00.129447  613485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:20:00.139861  613485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:00.153443  613485 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:00.172024  613485 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:00.184764  613485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:20:00.194459  613485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:20:00.204982  613485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:20:00.308181  613485 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:20:00.439863  613485 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:20:00.439938  613485 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:20:00.445162  613485 start.go:563] Will wait 60s for crictl version
	I1025 10:20:00.445238  613485 ssh_runner.go:195] Run: which crictl
	I1025 10:20:00.449832  613485 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:20:00.481178  613485 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:20:00.481275  613485 ssh_runner.go:195] Run: crio --version
	I1025 10:20:00.517223  613485 ssh_runner.go:195] Run: crio --version
	I1025 10:20:00.559132  613485 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
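	The sed edits at 10:20:00 above rewrite CRI-O's drop-in config. Reconstructed from those commands (not captured from the node), the relevant keys in /etc/crio/crio.conf.d/02-crio.conf should read as in the comments below:

	  # verification sketch; expected values shown as comments
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.10.1"
	  # cgroup_manager = "systemd"
	  # conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",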
	I1025 10:19:58.681467  604413 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 10:19:58.686697  604413 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 10:19:58.686721  604413 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 10:19:58.704268  604413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 10:19:58.989716  604413 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 10:19:58.989809  604413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:19:58.989844  604413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-899665 minikube.k8s.io/updated_at=2025_10_25T10_19_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689 minikube.k8s.io/name=no-preload-899665 minikube.k8s.io/primary=true
	I1025 10:19:59.086094  604413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:19:59.099986  604413 ops.go:34] apiserver oom_adj: -16
	I1025 10:19:59.587011  604413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:20:00.087137  604413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:20:00.586419  604413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:20:01.086894  604413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
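	The repeated `get sa default` runs above are a readiness poll: the "default" ServiceAccount only appears once the controller manager has synced the namespace, so minikube retries roughly every half second. The polled command, verbatim from the log:

	  sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig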
	
	
	==> CRI-O <==
	Oct 25 10:19:48 old-k8s-version-714798 crio[774]: time="2025-10-25T10:19:48.542162971Z" level=info msg="Starting container: e67b8a16b593c64325ffa74095a3d0401749d016fcf41285b552b8ee35a0cdf6" id=b041dd43-4538-4eea-87bc-a7498910a9d8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:19:48 old-k8s-version-714798 crio[774]: time="2025-10-25T10:19:48.54480074Z" level=info msg="Started container" PID=2128 containerID=e67b8a16b593c64325ffa74095a3d0401749d016fcf41285b552b8ee35a0cdf6 description=kube-system/coredns-5dd5756b68-k5644/coredns id=b041dd43-4538-4eea-87bc-a7498910a9d8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c8fa54268379a1553d795081f59fda252b3be1b617f9afef6d459f5fbf7a160c
	Oct 25 10:19:51 old-k8s-version-714798 crio[774]: time="2025-10-25T10:19:51.716260373Z" level=info msg="Running pod sandbox: default/busybox/POD" id=4a572cdc-6e4e-4608-97e2-3d190971a32b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:19:51 old-k8s-version-714798 crio[774]: time="2025-10-25T10:19:51.716410027Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:19:51 old-k8s-version-714798 crio[774]: time="2025-10-25T10:19:51.722756569Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f4a513419171036dde796a43b0e94aef872e7e8ce477a8fa5930819e2c90b763 UID:419d2dd5-4eb7-49cf-a8cf-591e99689202 NetNS:/var/run/netns/2b448484-9425-403c-a5f7-e90ac5343061 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000894320}] Aliases:map[]}"
	Oct 25 10:19:51 old-k8s-version-714798 crio[774]: time="2025-10-25T10:19:51.722789697Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 25 10:19:51 old-k8s-version-714798 crio[774]: time="2025-10-25T10:19:51.735868374Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f4a513419171036dde796a43b0e94aef872e7e8ce477a8fa5930819e2c90b763 UID:419d2dd5-4eb7-49cf-a8cf-591e99689202 NetNS:/var/run/netns/2b448484-9425-403c-a5f7-e90ac5343061 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000894320}] Aliases:map[]}"
	Oct 25 10:19:51 old-k8s-version-714798 crio[774]: time="2025-10-25T10:19:51.73607249Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 25 10:19:51 old-k8s-version-714798 crio[774]: time="2025-10-25T10:19:51.737111468Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 10:19:51 old-k8s-version-714798 crio[774]: time="2025-10-25T10:19:51.738333069Z" level=info msg="Ran pod sandbox f4a513419171036dde796a43b0e94aef872e7e8ce477a8fa5930819e2c90b763 with infra container: default/busybox/POD" id=4a572cdc-6e4e-4608-97e2-3d190971a32b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:19:51 old-k8s-version-714798 crio[774]: time="2025-10-25T10:19:51.739834695Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e6346b76-b9fe-4c84-b7f6-d8fefb995ccf name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:19:51 old-k8s-version-714798 crio[774]: time="2025-10-25T10:19:51.740104243Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e6346b76-b9fe-4c84-b7f6-d8fefb995ccf name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:19:51 old-k8s-version-714798 crio[774]: time="2025-10-25T10:19:51.740155718Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e6346b76-b9fe-4c84-b7f6-d8fefb995ccf name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:19:51 old-k8s-version-714798 crio[774]: time="2025-10-25T10:19:51.740802539Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=08a72bff-7835-4ff8-b35c-5b22f5b1b2f8 name=/runtime.v1.ImageService/PullImage
	Oct 25 10:19:51 old-k8s-version-714798 crio[774]: time="2025-10-25T10:19:51.743423553Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 25 10:19:55 old-k8s-version-714798 crio[774]: time="2025-10-25T10:19:55.220691776Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=08a72bff-7835-4ff8-b35c-5b22f5b1b2f8 name=/runtime.v1.ImageService/PullImage
	Oct 25 10:19:55 old-k8s-version-714798 crio[774]: time="2025-10-25T10:19:55.224004701Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bd227c01-9726-40d1-9f43-e44c3fa24cbc name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:19:55 old-k8s-version-714798 crio[774]: time="2025-10-25T10:19:55.260203621Z" level=info msg="Creating container: default/busybox/busybox" id=5220a468-013b-4237-8cb0-724e6d17e64e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:19:55 old-k8s-version-714798 crio[774]: time="2025-10-25T10:19:55.260389221Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:19:55 old-k8s-version-714798 crio[774]: time="2025-10-25T10:19:55.366655435Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:19:55 old-k8s-version-714798 crio[774]: time="2025-10-25T10:19:55.367394633Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:19:55 old-k8s-version-714798 crio[774]: time="2025-10-25T10:19:55.520899468Z" level=info msg="Created container 148e0609358885aa1cc1378a9bda201eebdcda65c976c88e43247523ad2a0dd1: default/busybox/busybox" id=5220a468-013b-4237-8cb0-724e6d17e64e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:19:55 old-k8s-version-714798 crio[774]: time="2025-10-25T10:19:55.521794901Z" level=info msg="Starting container: 148e0609358885aa1cc1378a9bda201eebdcda65c976c88e43247523ad2a0dd1" id=e0e68346-e659-4fed-a679-8d6b2cf5d520 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:19:55 old-k8s-version-714798 crio[774]: time="2025-10-25T10:19:55.524362611Z" level=info msg="Started container" PID=2198 containerID=148e0609358885aa1cc1378a9bda201eebdcda65c976c88e43247523ad2a0dd1 description=default/busybox/busybox id=e0e68346-e659-4fed-a679-8d6b2cf5d520 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f4a513419171036dde796a43b0e94aef872e7e8ce477a8fa5930819e2c90b763
	Oct 25 10:20:02 old-k8s-version-714798 crio[774]: time="2025-10-25T10:20:02.514377759Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	148e060935888       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   f4a5134191710       busybox                                          default
	e67b8a16b593c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      15 seconds ago      Running             coredns                   0                   c8fa54268379a       coredns-5dd5756b68-k5644                         kube-system
	9adaf49091430       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      15 seconds ago      Running             storage-provisioner       0                   86ec5e9c7e802       storage-provisioner                              kube-system
	40ef28646d44e       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    26 seconds ago      Running             kindnet-cni               0                   3f0e0167cb7c2       kindnet-g9r7c                                    kube-system
	ed35c5c305d91       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      29 seconds ago      Running             kube-proxy                0                   fd2efa1e895b2       kube-proxy-kqg7q                                 kube-system
	13f2e59258c00       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      47 seconds ago      Running             etcd                      0                   cb82a4f8a98c4       etcd-old-k8s-version-714798                      kube-system
	6aafae7143639       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      47 seconds ago      Running             kube-apiserver            0                   319255e86d6b8       kube-apiserver-old-k8s-version-714798            kube-system
	5d50fd5f8fddf       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      47 seconds ago      Running             kube-controller-manager   0                   a249336fc1e69       kube-controller-manager-old-k8s-version-714798   kube-system
	842869c47f2a6       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      47 seconds ago      Running             kube-scheduler            0                   e58263b998607       kube-scheduler-old-k8s-version-714798            kube-system
	
	
	==> coredns [e67b8a16b593c64325ffa74095a3d0401749d016fcf41285b552b8ee35a0cdf6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50774 - 23866 "HINFO IN 4494719882314135084.1990904337771491471. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.12162692s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-714798
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-714798
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=old-k8s-version-714798
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_19_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:19:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-714798
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:20:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:19:52 +0000   Sat, 25 Oct 2025 10:19:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:19:52 +0000   Sat, 25 Oct 2025 10:19:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:19:52 +0000   Sat, 25 Oct 2025 10:19:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:19:52 +0000   Sat, 25 Oct 2025 10:19:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-714798
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                ae2946a1-bd36-4e8d-a493-cdd7e65b514c
	  Boot ID:                    41de8bfa-0cc1-441f-80d4-c56fb8371229
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-5dd5756b68-k5644                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     30s
	  kube-system                 etcd-old-k8s-version-714798                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         43s
	  kube-system                 kindnet-g9r7c                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-old-k8s-version-714798             250m (3%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-old-k8s-version-714798    200m (2%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-kqg7q                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-old-k8s-version-714798             100m (1%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 29s   kube-proxy       
	  Normal  Starting                 43s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s   kubelet          Node old-k8s-version-714798 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s   kubelet          Node old-k8s-version-714798 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s   kubelet          Node old-k8s-version-714798 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s   node-controller  Node old-k8s-version-714798 event: Registered Node old-k8s-version-714798 in Controller
	  Normal  NodeReady                16s   kubelet          Node old-k8s-version-714798 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[Oct25 10:18] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 3d 4d bf 49 5d 08 06
	[  +0.000365] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fe 72 b8 ab d2 81 08 06
	[ +29.291338] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 96 23 11 37 e3 00 08 06
	[  +0.000335] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[ +21.527050] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 89 98 95 1f c3 08 06
	[  +0.000689] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[Oct25 10:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[  +9.472150] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
	[  +6.585715] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ce 90 e9 36 a0 95 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[ +15.111475] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 5e 04 d2 54 0d 08 06
	[  +0.000467] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
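
	The repeated "martian source" messages above are the kernel logging packets whose source address does not fit the receiving interface's routes; during pod churn on the kindnet bridge they are noise rather than a failure, and they only appear because martian logging is enabled. A hedged way to inspect or silence that logging on the host, using standard sysctls rather than anything the test suite itself runs:

	  # these messages require log_martians to be on, so this should report 1
	  sysctl net.ipv4.conf.all.log_martians
	  # optionally silence the logging for the current boot
	  sudo sysctl -w net.ipv4.conf.all.log_martians=0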
	
	
	==> etcd [13f2e59258c0025212fe929125235cc92e970d33975c16747d345553a17c85d4] <==
	{"level":"info","ts":"2025-10-25T10:19:17.460697Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-25T10:19:17.460707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-10-25T10:19:17.460716Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-25T10:19:17.461579Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T10:19:17.462197Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-714798 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-25T10:19:17.462211Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T10:19:17.462231Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T10:19:17.462517Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T10:19:17.462601Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T10:19:17.462625Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T10:19:17.462481Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-25T10:19:17.462641Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-25T10:19:17.463703Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-25T10:19:17.463707Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"warn","ts":"2025-10-25T10:19:49.448026Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"238.274529ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596610800589519 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:294 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:739 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-25T10:19:49.448238Z","caller":"traceutil/trace.go:171","msg":"trace[911535651] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"364.380476ms","start":"2025-10-25T10:19:49.083829Z","end":"2025-10-25T10:19:49.448209Z","steps":["trace[911535651] 'process raft request'  (duration: 125.144317ms)","trace[911535651] 'compare'  (duration: 238.073339ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T10:19:49.44828Z","caller":"traceutil/trace.go:171","msg":"trace[1323356008] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"362.617996ms","start":"2025-10-25T10:19:49.085635Z","end":"2025-10-25T10:19:49.448253Z","steps":["trace[1323356008] 'process raft request'  (duration: 362.486098ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:19:49.448342Z","caller":"traceutil/trace.go:171","msg":"trace[775596079] linearizableReadLoop","detail":"{readStateIndex:416; appliedIndex:414; }","duration":"320.256174ms","start":"2025-10-25T10:19:49.128051Z","end":"2025-10-25T10:19:49.448307Z","steps":["trace[775596079] 'read index received'  (duration: 80.857146ms)","trace[775596079] 'applied index is now lower than readState.Index'  (duration: 239.398265ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T10:19:49.448386Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-10-25T10:19:49.08382Z","time spent":"364.477988ms","remote":"127.0.0.1:42214","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":796,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:294 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:739 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >"}
	{"level":"warn","ts":"2025-10-25T10:19:49.448393Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-10-25T10:19:49.085617Z","time spent":"362.725416ms","remote":"127.0.0.1:42522","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3830,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-5dd5756b68\" mod_revision:362 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-5dd5756b68\" value_size:3770 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-5dd5756b68\" > >"}
	{"level":"warn","ts":"2025-10-25T10:19:49.448612Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"320.578406ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:8 size:41192"}
	{"level":"info","ts":"2025-10-25T10:19:49.448654Z","caller":"traceutil/trace.go:171","msg":"trace[1363941240] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:8; response_revision:402; }","duration":"320.627514ms","start":"2025-10-25T10:19:49.128015Z","end":"2025-10-25T10:19:49.448642Z","steps":["trace[1363941240] 'agreement among raft nodes before linearized reading'  (duration: 320.348676ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:19:49.448682Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-10-25T10:19:49.127997Z","time spent":"320.678063ms","remote":"127.0.0.1:42224","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":8,"response size":41215,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"info","ts":"2025-10-25T10:19:55.647459Z","caller":"traceutil/trace.go:171","msg":"trace[9472684] transaction","detail":"{read_only:false; response_revision:416; number_of_response:1; }","duration":"123.045574ms","start":"2025-10-25T10:19:55.524392Z","end":"2025-10-25T10:19:55.647437Z","steps":["trace[9472684] 'process raft request'  (duration: 122.881217ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:19:56.01551Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.687545ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596610800589584 > lease_revoke:<id:06ed9a1ae12f9a89>","response":"size:28"}
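
	The "apply request took too long" warnings above (238-364ms against etcd's 100ms expected duration) line up with the host load average of ~7 shown in the kernel section below, so they read as CPU/disk contention on the CI host rather than an etcd defect. A hedged sketch for probing the same member from inside the node; etcdctl and these flags are standard, but the certificate paths are the minikube/kubeadm defaults and an assumption here:

	  sudo ETCDCTL_API=3 etcdctl \
	    --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    endpoint status -w table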
	
	
	==> kernel <==
	 10:20:04 up  2:02,  0 user,  load average: 7.10, 5.01, 6.02
	Linux old-k8s-version-714798 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [40ef28646d44e3e6a07ece0376f308b9084e65def0a9e8327d679f5262d80d54] <==
	I1025 10:19:37.777435       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:19:37.777744       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:19:37.777942       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:19:37.777957       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:19:37.777984       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:19:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:19:38.075272       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:19:38.075307       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:19:38.075372       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:19:38.075538       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 10:19:38.473942       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:19:38.473978       1 metrics.go:72] Registering metrics
	I1025 10:19:38.474075       1 controller.go:711] "Syncing nftables rules"
	I1025 10:19:48.080550       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:19:48.080642       1 main.go:301] handling current node
	I1025 10:19:58.075417       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:19:58.075461       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6aafae7143639cb414e4110aaa12ad75ed37f5c87ff92b2f9edf1a037c0e48b5] <==
	I1025 10:19:18.747370       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1025 10:19:18.747423       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:19:18.747502       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1025 10:19:18.747592       1 aggregator.go:166] initial CRD sync complete...
	I1025 10:19:18.747609       1 autoregister_controller.go:141] Starting autoregister controller
	I1025 10:19:18.747616       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:19:18.747623       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:19:18.749984       1 controller.go:624] quota admission added evaluator for: namespaces
	I1025 10:19:18.777527       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:19:18.787686       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1025 10:19:19.652551       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 10:19:19.656104       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 10:19:19.656122       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:19:20.204419       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:19:20.252062       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:19:20.362506       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 10:19:20.369360       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1025 10:19:20.370608       1 controller.go:624] quota admission added evaluator for: endpoints
	I1025 10:19:20.380078       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:19:20.690366       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1025 10:19:21.728804       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1025 10:19:21.742879       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 10:19:21.753945       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1025 10:19:34.478926       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1025 10:19:34.579794       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [5d50fd5f8fddf3ed9a9c25be205693d048dc9d11bf37f2ee6e6972dfecdcfdf7] <==
	I1025 10:19:33.787020       1 shared_informer.go:318] Caches are synced for resource quota
	I1025 10:19:33.824777       1 shared_informer.go:318] Caches are synced for persistent volume
	I1025 10:19:33.830189       1 shared_informer.go:318] Caches are synced for resource quota
	I1025 10:19:33.831086       1 shared_informer.go:318] Caches are synced for disruption
	I1025 10:19:34.218102       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 10:19:34.258783       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 10:19:34.258824       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1025 10:19:34.484286       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1025 10:19:34.593097       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-kqg7q"
	I1025 10:19:34.593517       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-g9r7c"
	I1025 10:19:34.687269       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-cm4wv"
	I1025 10:19:34.694093       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-k5644"
	I1025 10:19:34.704103       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="220.054761ms"
	I1025 10:19:34.712001       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.824297ms"
	I1025 10:19:34.713178       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.758µs"
	I1025 10:19:35.603907       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1025 10:19:35.718221       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-cm4wv"
	I1025 10:19:35.733475       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="130.475461ms"
	I1025 10:19:35.742945       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.404451ms"
	I1025 10:19:35.743888       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="92.528µs"
	I1025 10:19:48.167079       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="190.827µs"
	I1025 10:19:48.179028       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="113.365µs"
	I1025 10:19:48.679033       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1025 10:19:49.450013       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="369.219478ms"
	I1025 10:19:49.450135       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.152µs"
	
	
	==> kube-proxy [ed35c5c305d917a363ba4c6c4501c9122a5bd5ced81c271fd0a9f6bb2a107e00] <==
	I1025 10:19:35.153792       1 server_others.go:69] "Using iptables proxy"
	I1025 10:19:35.172276       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1025 10:19:35.217701       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:19:35.225409       1 server_others.go:152] "Using iptables Proxier"
	I1025 10:19:35.225529       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1025 10:19:35.225558       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1025 10:19:35.225639       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 10:19:35.225985       1 server.go:846] "Version info" version="v1.28.0"
	I1025 10:19:35.226528       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:19:35.228005       1 config.go:188] "Starting service config controller"
	I1025 10:19:35.228098       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 10:19:35.229212       1 config.go:97] "Starting endpoint slice config controller"
	I1025 10:19:35.229720       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 10:19:35.229309       1 config.go:315] "Starting node config controller"
	I1025 10:19:35.232814       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 10:19:35.328759       1 shared_informer.go:318] Caches are synced for service config
	I1025 10:19:35.330199       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1025 10:19:35.333488       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [842869c47f2a6fa3784c4b7a28d41c2bb99a7e4b0e791eaffa4efa95dcdd0906] <==
	W1025 10:19:19.606849       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1025 10:19:19.606906       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1025 10:19:19.608142       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1025 10:19:19.608169       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1025 10:19:19.635380       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1025 10:19:19.635423       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1025 10:19:19.669449       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1025 10:19:19.669501       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1025 10:19:19.675173       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1025 10:19:19.675215       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1025 10:19:19.676160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1025 10:19:19.676195       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1025 10:19:19.702174       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1025 10:19:19.702218       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 10:19:19.708669       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1025 10:19:19.708709       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1025 10:19:19.732178       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1025 10:19:19.732218       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1025 10:19:19.770350       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1025 10:19:19.770479       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1025 10:19:19.786210       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1025 10:19:19.786254       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1025 10:19:19.872913       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1025 10:19:19.872960       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1025 10:19:22.030461       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
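
	The "forbidden" list/watch errors above are the usual kube-scheduler startup race: its informers begin listing resources before the apiserver has finished reconciling the bootstrap RBAC policy, and the final line shows the caches syncing once it has. A hedged check against this profile, if one wanted to confirm the bootstrap binding by hand afterwards:

	  kubectl --context old-k8s-version-714798 get clusterrolebinding system:kube-scheduler -o wide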
	
	
	==> kubelet <==
	Oct 25 10:19:33 old-k8s-version-714798 kubelet[1370]: I1025 10:19:33.751486    1370 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 25 10:19:33 old-k8s-version-714798 kubelet[1370]: I1025 10:19:33.752575    1370 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 25 10:19:34 old-k8s-version-714798 kubelet[1370]: I1025 10:19:34.601730    1370 topology_manager.go:215] "Topology Admit Handler" podUID="e6fe02fa-9fa4-4ff6-967f-e6f1bdeb8d6b" podNamespace="kube-system" podName="kube-proxy-kqg7q"
	Oct 25 10:19:34 old-k8s-version-714798 kubelet[1370]: I1025 10:19:34.603577    1370 topology_manager.go:215] "Topology Admit Handler" podUID="b38a2108-5fba-42dd-82ea-22ed6eafbe86" podNamespace="kube-system" podName="kindnet-g9r7c"
	Oct 25 10:19:34 old-k8s-version-714798 kubelet[1370]: I1025 10:19:34.672131    1370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6fe02fa-9fa4-4ff6-967f-e6f1bdeb8d6b-xtables-lock\") pod \"kube-proxy-kqg7q\" (UID: \"e6fe02fa-9fa4-4ff6-967f-e6f1bdeb8d6b\") " pod="kube-system/kube-proxy-kqg7q"
	Oct 25 10:19:34 old-k8s-version-714798 kubelet[1370]: I1025 10:19:34.672203    1370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e6fe02fa-9fa4-4ff6-967f-e6f1bdeb8d6b-kube-proxy\") pod \"kube-proxy-kqg7q\" (UID: \"e6fe02fa-9fa4-4ff6-967f-e6f1bdeb8d6b\") " pod="kube-system/kube-proxy-kqg7q"
	Oct 25 10:19:34 old-k8s-version-714798 kubelet[1370]: I1025 10:19:34.672237    1370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b38a2108-5fba-42dd-82ea-22ed6eafbe86-lib-modules\") pod \"kindnet-g9r7c\" (UID: \"b38a2108-5fba-42dd-82ea-22ed6eafbe86\") " pod="kube-system/kindnet-g9r7c"
	Oct 25 10:19:34 old-k8s-version-714798 kubelet[1370]: I1025 10:19:34.672273    1370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g65d8\" (UniqueName: \"kubernetes.io/projected/b38a2108-5fba-42dd-82ea-22ed6eafbe86-kube-api-access-g65d8\") pod \"kindnet-g9r7c\" (UID: \"b38a2108-5fba-42dd-82ea-22ed6eafbe86\") " pod="kube-system/kindnet-g9r7c"
	Oct 25 10:19:34 old-k8s-version-714798 kubelet[1370]: I1025 10:19:34.672306    1370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6fe02fa-9fa4-4ff6-967f-e6f1bdeb8d6b-lib-modules\") pod \"kube-proxy-kqg7q\" (UID: \"e6fe02fa-9fa4-4ff6-967f-e6f1bdeb8d6b\") " pod="kube-system/kube-proxy-kqg7q"
	Oct 25 10:19:34 old-k8s-version-714798 kubelet[1370]: I1025 10:19:34.672381    1370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b38a2108-5fba-42dd-82ea-22ed6eafbe86-cni-cfg\") pod \"kindnet-g9r7c\" (UID: \"b38a2108-5fba-42dd-82ea-22ed6eafbe86\") " pod="kube-system/kindnet-g9r7c"
	Oct 25 10:19:34 old-k8s-version-714798 kubelet[1370]: I1025 10:19:34.672416    1370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4rkj\" (UniqueName: \"kubernetes.io/projected/e6fe02fa-9fa4-4ff6-967f-e6f1bdeb8d6b-kube-api-access-n4rkj\") pod \"kube-proxy-kqg7q\" (UID: \"e6fe02fa-9fa4-4ff6-967f-e6f1bdeb8d6b\") " pod="kube-system/kube-proxy-kqg7q"
	Oct 25 10:19:34 old-k8s-version-714798 kubelet[1370]: I1025 10:19:34.672443    1370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b38a2108-5fba-42dd-82ea-22ed6eafbe86-xtables-lock\") pod \"kindnet-g9r7c\" (UID: \"b38a2108-5fba-42dd-82ea-22ed6eafbe86\") " pod="kube-system/kindnet-g9r7c"
	Oct 25 10:19:35 old-k8s-version-714798 kubelet[1370]: I1025 10:19:35.878558    1370 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-kqg7q" podStartSLOduration=1.878495056 podCreationTimestamp="2025-10-25 10:19:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:19:35.878397051 +0000 UTC m=+14.188445249" watchObservedRunningTime="2025-10-25 10:19:35.878495056 +0000 UTC m=+14.188543254"
	Oct 25 10:19:37 old-k8s-version-714798 kubelet[1370]: I1025 10:19:37.897679    1370 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-g9r7c" podStartSLOduration=1.33990828 podCreationTimestamp="2025-10-25 10:19:34 +0000 UTC" firstStartedPulling="2025-10-25 10:19:34.9154507 +0000 UTC m=+13.225498895" lastFinishedPulling="2025-10-25 10:19:37.473154019 +0000 UTC m=+15.783202211" observedRunningTime="2025-10-25 10:19:37.897197669 +0000 UTC m=+16.207245867" watchObservedRunningTime="2025-10-25 10:19:37.897611596 +0000 UTC m=+16.207659794"
	Oct 25 10:19:48 old-k8s-version-714798 kubelet[1370]: I1025 10:19:48.144134    1370 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 25 10:19:48 old-k8s-version-714798 kubelet[1370]: I1025 10:19:48.167263    1370 topology_manager.go:215] "Topology Admit Handler" podUID="2c88bd24-b8f1-44bf-83de-2052b4b210fc" podNamespace="kube-system" podName="coredns-5dd5756b68-k5644"
	Oct 25 10:19:48 old-k8s-version-714798 kubelet[1370]: I1025 10:19:48.168648    1370 topology_manager.go:215] "Topology Admit Handler" podUID="fa27e0de-acda-44a9-a974-7abe0a4c94df" podNamespace="kube-system" podName="storage-provisioner"
	Oct 25 10:19:48 old-k8s-version-714798 kubelet[1370]: I1025 10:19:48.265793    1370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffnzc\" (UniqueName: \"kubernetes.io/projected/fa27e0de-acda-44a9-a974-7abe0a4c94df-kube-api-access-ffnzc\") pod \"storage-provisioner\" (UID: \"fa27e0de-acda-44a9-a974-7abe0a4c94df\") " pod="kube-system/storage-provisioner"
	Oct 25 10:19:48 old-k8s-version-714798 kubelet[1370]: I1025 10:19:48.265968    1370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fa27e0de-acda-44a9-a974-7abe0a4c94df-tmp\") pod \"storage-provisioner\" (UID: \"fa27e0de-acda-44a9-a974-7abe0a4c94df\") " pod="kube-system/storage-provisioner"
	Oct 25 10:19:48 old-k8s-version-714798 kubelet[1370]: I1025 10:19:48.266029    1370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z27j7\" (UniqueName: \"kubernetes.io/projected/2c88bd24-b8f1-44bf-83de-2052b4b210fc-kube-api-access-z27j7\") pod \"coredns-5dd5756b68-k5644\" (UID: \"2c88bd24-b8f1-44bf-83de-2052b4b210fc\") " pod="kube-system/coredns-5dd5756b68-k5644"
	Oct 25 10:19:48 old-k8s-version-714798 kubelet[1370]: I1025 10:19:48.266068    1370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c88bd24-b8f1-44bf-83de-2052b4b210fc-config-volume\") pod \"coredns-5dd5756b68-k5644\" (UID: \"2c88bd24-b8f1-44bf-83de-2052b4b210fc\") " pod="kube-system/coredns-5dd5756b68-k5644"
	Oct 25 10:19:49 old-k8s-version-714798 kubelet[1370]: I1025 10:19:49.080911    1370 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-k5644" podStartSLOduration=15.080847527 podCreationTimestamp="2025-10-25 10:19:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:19:49.080512258 +0000 UTC m=+27.390560459" watchObservedRunningTime="2025-10-25 10:19:49.080847527 +0000 UTC m=+27.390895734"
	Oct 25 10:19:49 old-k8s-version-714798 kubelet[1370]: I1025 10:19:49.081058    1370 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.081028025 podCreationTimestamp="2025-10-25 10:19:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:19:48.973099585 +0000 UTC m=+27.283147780" watchObservedRunningTime="2025-10-25 10:19:49.081028025 +0000 UTC m=+27.391076225"
	Oct 25 10:19:51 old-k8s-version-714798 kubelet[1370]: I1025 10:19:51.414077    1370 topology_manager.go:215] "Topology Admit Handler" podUID="419d2dd5-4eb7-49cf-a8cf-591e99689202" podNamespace="default" podName="busybox"
	Oct 25 10:19:51 old-k8s-version-714798 kubelet[1370]: I1025 10:19:51.489401    1370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb29f\" (UniqueName: \"kubernetes.io/projected/419d2dd5-4eb7-49cf-a8cf-591e99689202-kube-api-access-lb29f\") pod \"busybox\" (UID: \"419d2dd5-4eb7-49cf-a8cf-591e99689202\") " pod="default/busybox"
	
	
	==> storage-provisioner [9adaf4909143081f3a0f8c56ce1affe8668dbce98f8c5ed81096e486a28b1fc8] <==
	I1025 10:19:48.551455       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:19:48.561374       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:19:48.561523       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 10:19:48.571236       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:19:48.571511       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-714798_1e899b5e-3e68-4bf1-a851-abe7f7f0d188!
	I1025 10:19:48.571656       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"989e4d24-c526-4f80-8238-4bbd30d72adb", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-714798_1e899b5e-3e68-4bf1-a851-abe7f7f0d188 became leader
	I1025 10:19:48.672541       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-714798_1e899b5e-3e68-4bf1-a851-abe7f7f0d188!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-714798 -n old-k8s-version-714798
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-714798 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.91s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.6s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-899665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-899665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (294.835535ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:20:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-899665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-899665 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-899665 describe deploy/metrics-server -n kube-system: exit status 1 (72.47445ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-899665 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
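The image-mismatch assertion above is a symptom; the root cause is the MK_ADDON_ENABLE_PAUSED exit: before enabling an addon, minikube checks whether the cluster is paused by shelling out to runc, and `sudo runc list -f json` fails because /run/runc does not exist inside this crio-based node. A hedged reproduction against the same profile, wrapping the command from the stderr block in `minikube ssh` (the wrapping is my addition, not what the test runs):

	minikube -p no-preload-899665 ssh -- sudo runc list -f json
	# expected to fail the same way:
	#   time="..." level=error msg="open /run/runc: no such file or directory"
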
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-899665
helpers_test.go:243: (dbg) docker inspect no-preload-899665:

-- stdout --
	[
	    {
	        "Id": "695e74f3d798876c79ec9dbb6df46faeaba6433cb664a7ed6875cdc91796b192",
	        "Created": "2025-10-25T10:19:22.595874496Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 604928,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:19:22.643370449Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/695e74f3d798876c79ec9dbb6df46faeaba6433cb664a7ed6875cdc91796b192/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/695e74f3d798876c79ec9dbb6df46faeaba6433cb664a7ed6875cdc91796b192/hostname",
	        "HostsPath": "/var/lib/docker/containers/695e74f3d798876c79ec9dbb6df46faeaba6433cb664a7ed6875cdc91796b192/hosts",
	        "LogPath": "/var/lib/docker/containers/695e74f3d798876c79ec9dbb6df46faeaba6433cb664a7ed6875cdc91796b192/695e74f3d798876c79ec9dbb6df46faeaba6433cb664a7ed6875cdc91796b192-json.log",
	        "Name": "/no-preload-899665",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-899665:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-899665",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "695e74f3d798876c79ec9dbb6df46faeaba6433cb664a7ed6875cdc91796b192",
	                "LowerDir": "/var/lib/docker/overlay2/8b682c6b2402b5b71231c37bbc02e0297cfeac2f648531c88d56a37d472a144a-init/diff:/var/lib/docker/overlay2/9d1960ec6ab151df7efbd4d43f9ccd1aaf4e0bd9e7db4285118644a6d5a54279/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8b682c6b2402b5b71231c37bbc02e0297cfeac2f648531c88d56a37d472a144a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8b682c6b2402b5b71231c37bbc02e0297cfeac2f648531c88d56a37d472a144a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8b682c6b2402b5b71231c37bbc02e0297cfeac2f648531c88d56a37d472a144a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-899665",
	                "Source": "/var/lib/docker/volumes/no-preload-899665/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-899665",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-899665",
	                "name.minikube.sigs.k8s.io": "no-preload-899665",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6872c91aa415d178bb10ba51039ded5ac8803d77f7861e6cae650b6d5dd6bccf",
	            "SandboxKey": "/var/run/docker/netns/6872c91aa415",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-899665": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:cc:46:8a:b8:98",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c8aca1f62a354ce1975d9d9ac93fc72b53c6dd0c4c9ae45ab02ef47d3a0fdf93",
	                    "EndpointID": "d04730b9e86e58038fcd773c6e16242d823723d5e191fc98f1150bdf3a78219e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-899665",
	                        "695e74f3d798"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
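
For reference, the 8443/tcp mapping in the inspect output above (HostPort 33096) is the apiserver endpoint that the status and log commands below dial. A hedged one-liner to pull it out of the same output, assuming jq is available on the host:

	docker inspect no-preload-899665 | jq -r '.[0].NetworkSettings.Ports["8443/tcp"][0].HostPort'
	# prints: 33096
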
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-899665 -n no-preload-899665
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-899665 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-899665 logs -n 25: (1.294956017s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p flannel-119085 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                        │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │                     │
	│ ssh     │ -p flannel-119085 sudo systemctl cat docker --no-pager                                                                                                                                                                                        │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │ 25 Oct 25 10:19 UTC │
	│ ssh     │ -p flannel-119085 sudo cat /etc/docker/daemon.json                                                                                                                                                                                            │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │                     │
	│ ssh     │ -p flannel-119085 sudo docker system info                                                                                                                                                                                                     │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │                     │
	│ ssh     │ -p flannel-119085 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                    │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │                     │
	│ ssh     │ -p flannel-119085 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                    │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │ 25 Oct 25 10:19 UTC │
	│ ssh     │ -p flannel-119085 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                               │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │                     │
	│ ssh     │ -p flannel-119085 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                         │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │ 25 Oct 25 10:19 UTC │
	│ ssh     │ -p flannel-119085 sudo cri-dockerd --version                                                                                                                                                                                                  │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                    │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ ssh     │ -p flannel-119085 sudo systemctl cat containerd --no-pager                                                                                                                                                                                    │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                             │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo cat /etc/containerd/config.toml                                                                                                                                                                                        │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo containerd config dump                                                                                                                                                                                                 │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                          │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo systemctl cat crio --no-pager                                                                                                                                                                                          │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-714798 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-714798 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ ssh     │ -p flannel-119085 sudo crio config                                                                                                                                                                                                            │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ delete  │ -p flannel-119085                                                                                                                                                                                                                             │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ stop    │ -p old-k8s-version-714798 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-714798 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p newest-cni-667966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-667966      │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-714798 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-714798 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p old-k8s-version-714798 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-714798 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-899665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-899665      │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:20:23
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:20:23.300709  624632 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:20:23.301096  624632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:20:23.301114  624632 out.go:374] Setting ErrFile to fd 2...
	I1025 10:20:23.301122  624632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:20:23.301572  624632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 10:20:23.302262  624632 out.go:368] Setting JSON to false
	I1025 10:20:23.304299  624632 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7372,"bootTime":1761380251,"procs":417,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 10:20:23.304458  624632 start.go:141] virtualization: kvm guest
	I1025 10:20:23.306960  624632 out.go:179] * [old-k8s-version-714798] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 10:20:23.308909  624632 notify.go:220] Checking for updates...
	I1025 10:20:23.309498  624632 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:20:23.311348  624632 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:20:23.313341  624632 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:20:23.315424  624632 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	I1025 10:20:23.317047  624632 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 10:20:23.319372  624632 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:20:23.322053  624632 config.go:182] Loaded profile config "old-k8s-version-714798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:20:23.324462  624632 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1025 10:20:22.722087  613485 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:20:22.723533  613485 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:20:22.723565  613485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:20:22.723639  613485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:20:22.752475  613485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:20:22.759476  613485 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:20:22.759507  613485 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:20:22.759575  613485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:20:22.794357  613485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:20:22.832395  613485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 10:20:22.930076  613485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:20:22.934919  613485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:20:22.938143  613485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:20:23.068420  613485 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1025 10:20:23.069958  613485 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-767846" to be "Ready" ...
	I1025 10:20:23.362383  613485 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
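Note: the sed pipeline at 10:20:22.832395 above rewrites the coredns ConfigMap in place. Reconstructed from that command (a sketch, not captured output), the Corefile gains a log directive before the errors plugin and this hosts block before the forward plugin:

	hosts {
	   192.168.103.1 host.minikube.internal
	   fallthrough
	}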
	I1025 10:20:23.326650  624632 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:20:23.361560  624632 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 10:20:23.362134  624632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:20:23.474991  624632 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-25 10:20:23.456103682 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:20:23.475150  624632 docker.go:318] overlay module found
	I1025 10:20:23.476788  624632 out.go:179] * Using the docker driver based on existing profile
	I1025 10:20:23.478398  624632 start.go:305] selected driver: docker
	I1025 10:20:23.478425  624632 start.go:925] validating driver "docker" against &{Name:old-k8s-version-714798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-714798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:20:23.478569  624632 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:20:23.479393  624632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:20:23.571473  624632 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-25 10:20:23.559687458 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:20:23.571948  624632 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:20:23.572037  624632 cni.go:84] Creating CNI manager for ""
	I1025 10:20:23.572109  624632 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:20:23.572191  624632 start.go:349] cluster config:
	{Name:old-k8s-version-714798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-714798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:20:23.574966  624632 out.go:179] * Starting "old-k8s-version-714798" primary control-plane node in "old-k8s-version-714798" cluster
	I1025 10:20:23.576372  624632 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:20:23.577799  624632 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:20:23.579475  624632 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 10:20:23.579510  624632 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:20:23.579535  624632 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1025 10:20:23.579548  624632 cache.go:58] Caching tarball of preloaded images
	I1025 10:20:23.579656  624632 preload.go:233] Found /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 10:20:23.579675  624632 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1025 10:20:23.579810  624632 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/old-k8s-version-714798/config.json ...
	I1025 10:20:23.607233  624632 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:20:23.607260  624632 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:20:23.607282  624632 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:20:23.607349  624632 start.go:360] acquireMachinesLock for old-k8s-version-714798: {Name:mk97e2141704e9680122a6db3eca4557d7d2aee2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:20:23.607435  624632 start.go:364] duration metric: took 51.014µs to acquireMachinesLock for "old-k8s-version-714798"
	I1025 10:20:23.607461  624632 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:20:23.607471  624632 fix.go:54] fixHost starting: 
	I1025 10:20:23.607767  624632 cli_runner.go:164] Run: docker container inspect old-k8s-version-714798 --format={{.State.Status}}
	I1025 10:20:23.629577  624632 fix.go:112] recreateIfNeeded on old-k8s-version-714798: state=Stopped err=<nil>
	W1025 10:20:23.629619  624632 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:20:23.363674  613485 addons.go:514] duration metric: took 674.45378ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 10:20:23.574181  613485 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-767846" context rescaled to 1 replicas
	W1025 10:20:25.074694  613485 node_ready.go:57] node "default-k8s-diff-port-767846" has "Ready":"False" status (will retry)
	I1025 10:20:23.631403  624632 out.go:252] * Restarting existing docker container for "old-k8s-version-714798" ...
	I1025 10:20:23.631491  624632 cli_runner.go:164] Run: docker start old-k8s-version-714798
	I1025 10:20:23.932468  624632 cli_runner.go:164] Run: docker container inspect old-k8s-version-714798 --format={{.State.Status}}
	I1025 10:20:23.956085  624632 kic.go:430] container "old-k8s-version-714798" state is running.
	I1025 10:20:23.956547  624632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-714798
	I1025 10:20:23.978748  624632 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/old-k8s-version-714798/config.json ...
	I1025 10:20:23.979037  624632 machine.go:93] provisionDockerMachine start ...
	I1025 10:20:23.979124  624632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:20:24.001727  624632 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:24.002092  624632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1025 10:20:24.002114  624632 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:20:24.003059  624632 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47234->127.0.0.1:33108: read: connection reset by peer
	I1025 10:20:27.149919  624632 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-714798
	
	I1025 10:20:27.149957  624632 ubuntu.go:182] provisioning hostname "old-k8s-version-714798"
	I1025 10:20:27.150022  624632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:20:27.169715  624632 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:27.170006  624632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1025 10:20:27.170026  624632 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-714798 && echo "old-k8s-version-714798" | sudo tee /etc/hostname
	I1025 10:20:27.339339  624632 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-714798
	
	I1025 10:20:27.339446  624632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:20:27.361033  624632 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:27.361258  624632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1025 10:20:27.361276  624632 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-714798' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-714798/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-714798' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:20:27.509983  624632 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:20:27.510028  624632 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-321838/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-321838/.minikube}
	I1025 10:20:27.510057  624632 ubuntu.go:190] setting up certificates
	I1025 10:20:27.510072  624632 provision.go:84] configureAuth start
	I1025 10:20:27.510153  624632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-714798
	I1025 10:20:27.529756  624632 provision.go:143] copyHostCerts
	I1025 10:20:27.529844  624632 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem, removing ...
	I1025 10:20:27.529877  624632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem
	I1025 10:20:27.529973  624632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem (1078 bytes)
	I1025 10:20:27.530097  624632 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem, removing ...
	I1025 10:20:27.530106  624632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem
	I1025 10:20:27.530135  624632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem (1123 bytes)
	I1025 10:20:27.530196  624632 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem, removing ...
	I1025 10:20:27.530203  624632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem
	I1025 10:20:27.530228  624632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem (1679 bytes)
	I1025 10:20:27.530280  624632 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-714798 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-714798]
	I1025 10:20:27.651694  624632 provision.go:177] copyRemoteCerts
	I1025 10:20:27.651767  624632 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:20:27.651805  624632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:20:27.671792  624632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/old-k8s-version-714798/id_rsa Username:docker}
	I1025 10:20:27.786744  624632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:20:27.810211  624632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1025 10:20:27.831489  624632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:20:27.856169  624632 provision.go:87] duration metric: took 346.080135ms to configureAuth
	I1025 10:20:27.856203  624632 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:20:27.856399  624632 config.go:182] Loaded profile config "old-k8s-version-714798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:20:27.856502  624632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:20:27.877756  624632 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:27.877983  624632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1025 10:20:27.878001  624632 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:20:28.211904  624632 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:20:28.211952  624632 machine.go:96] duration metric: took 4.232896794s to provisionDockerMachine
	I1025 10:20:28.211969  624632 start.go:293] postStartSetup for "old-k8s-version-714798" (driver="docker")
	I1025 10:20:28.211983  624632 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:20:28.212062  624632 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:20:28.212116  624632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:20:28.232261  624632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/old-k8s-version-714798/id_rsa Username:docker}
	I1025 10:20:28.682878  621097 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 10:20:28.682977  621097 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 10:20:28.683089  621097 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 10:20:28.683161  621097 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 10:20:28.683210  621097 kubeadm.go:318] OS: Linux
	I1025 10:20:28.683260  621097 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 10:20:28.683364  621097 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 10:20:28.683439  621097 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 10:20:28.683515  621097 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 10:20:28.683579  621097 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 10:20:28.683655  621097 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 10:20:28.683732  621097 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 10:20:28.683808  621097 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 10:20:28.683935  621097 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 10:20:28.684057  621097 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 10:20:28.684208  621097 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 10:20:28.684296  621097 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 10:20:28.688528  621097 out.go:252]   - Generating certificates and keys ...
	I1025 10:20:28.688611  621097 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 10:20:28.688666  621097 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 10:20:28.688720  621097 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 10:20:28.688766  621097 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 10:20:28.688835  621097 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 10:20:28.688881  621097 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 10:20:28.688925  621097 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 10:20:28.689044  621097 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-667966] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1025 10:20:28.689111  621097 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 10:20:28.689223  621097 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-667966] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1025 10:20:28.689297  621097 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 10:20:28.689401  621097 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 10:20:28.689469  621097 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 10:20:28.689557  621097 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 10:20:28.689639  621097 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 10:20:28.689728  621097 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 10:20:28.689811  621097 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 10:20:28.689901  621097 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 10:20:28.689989  621097 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 10:20:28.690121  621097 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 10:20:28.690215  621097 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 10:20:28.691995  621097 out.go:252]   - Booting up control plane ...
	I1025 10:20:28.692112  621097 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 10:20:28.692207  621097 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 10:20:28.692290  621097 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 10:20:28.692454  621097 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 10:20:28.692597  621097 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 10:20:28.692781  621097 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 10:20:28.692909  621097 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 10:20:28.692983  621097 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 10:20:28.693124  621097 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 10:20:28.693209  621097 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 10:20:28.693263  621097 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 505.345924ms
	I1025 10:20:28.693406  621097 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 10:20:28.693520  621097 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1025 10:20:28.693632  621097 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 10:20:28.693745  621097 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 10:20:28.693848  621097 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.855252313s
	I1025 10:20:28.693938  621097 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.449590605s
	I1025 10:20:28.694035  621097 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.501140692s
	I1025 10:20:28.694201  621097 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 10:20:28.694408  621097 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 10:20:28.694459  621097 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 10:20:28.694719  621097 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-667966 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 10:20:28.694815  621097 kubeadm.go:318] [bootstrap-token] Using token: a7ffqx.vn3kytu0edce2nju
	I1025 10:20:28.696404  621097 out.go:252]   - Configuring RBAC rules ...
	I1025 10:20:28.696521  621097 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 10:20:28.696638  621097 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 10:20:28.696841  621097 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 10:20:28.697023  621097 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 10:20:28.697209  621097 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 10:20:28.697373  621097 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 10:20:28.697489  621097 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 10:20:28.697532  621097 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 10:20:28.697570  621097 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 10:20:28.697576  621097 kubeadm.go:318] 
	I1025 10:20:28.697634  621097 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 10:20:28.697640  621097 kubeadm.go:318] 
	I1025 10:20:28.697702  621097 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 10:20:28.697707  621097 kubeadm.go:318] 
	I1025 10:20:28.697727  621097 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 10:20:28.697779  621097 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 10:20:28.697820  621097 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 10:20:28.697825  621097 kubeadm.go:318] 
	I1025 10:20:28.697872  621097 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 10:20:28.697878  621097 kubeadm.go:318] 
	I1025 10:20:28.697921  621097 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 10:20:28.697929  621097 kubeadm.go:318] 
	I1025 10:20:28.697972  621097 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 10:20:28.698046  621097 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 10:20:28.698116  621097 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 10:20:28.698123  621097 kubeadm.go:318] 
	I1025 10:20:28.698207  621097 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 10:20:28.698307  621097 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 10:20:28.698314  621097 kubeadm.go:318] 
	I1025 10:20:28.698460  621097 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token a7ffqx.vn3kytu0edce2nju \
	I1025 10:20:28.698596  621097 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d220a0fdd8ffa4188e5a3a4b5b37c16072a735a52e5507fcdbcb4d38c461642f \
	I1025 10:20:28.698645  621097 kubeadm.go:318] 	--control-plane 
	I1025 10:20:28.698660  621097 kubeadm.go:318] 
	I1025 10:20:28.698736  621097 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 10:20:28.698742  621097 kubeadm.go:318] 
	I1025 10:20:28.698805  621097 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token a7ffqx.vn3kytu0edce2nju \
	I1025 10:20:28.698920  621097 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d220a0fdd8ffa4188e5a3a4b5b37c16072a735a52e5507fcdbcb4d38c461642f 
	I1025 10:20:28.698939  621097 cni.go:84] Creating CNI manager for ""
	I1025 10:20:28.698949  621097 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:20:28.700649  621097 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 10:20:28.340039  624632 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:20:28.344750  624632 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:20:28.344782  624632 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:20:28.344794  624632 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/addons for local assets ...
	I1025 10:20:28.344852  624632 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/files for local assets ...
	I1025 10:20:28.344924  624632 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem -> 3254552.pem in /etc/ssl/certs
	I1025 10:20:28.345014  624632 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:20:28.354165  624632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:20:28.376400  624632 start.go:296] duration metric: took 164.413023ms for postStartSetup
	I1025 10:20:28.376511  624632 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:20:28.376561  624632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:20:28.396278  624632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/old-k8s-version-714798/id_rsa Username:docker}
	I1025 10:20:28.499237  624632 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:20:28.505254  624632 fix.go:56] duration metric: took 4.897772727s for fixHost
	I1025 10:20:28.505282  624632 start.go:83] releasing machines lock for "old-k8s-version-714798", held for 4.897835155s
	I1025 10:20:28.505364  624632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-714798
	I1025 10:20:28.525461  624632 ssh_runner.go:195] Run: cat /version.json
	I1025 10:20:28.525531  624632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:20:28.525548  624632 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:20:28.525627  624632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:20:28.546453  624632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/old-k8s-version-714798/id_rsa Username:docker}
	I1025 10:20:28.546847  624632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/old-k8s-version-714798/id_rsa Username:docker}
	I1025 10:20:28.706337  624632 ssh_runner.go:195] Run: systemctl --version
	I1025 10:20:28.713808  624632 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:20:28.756282  624632 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:20:28.762374  624632 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:20:28.762449  624632 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:20:28.772619  624632 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:20:28.772659  624632 start.go:495] detecting cgroup driver to use...
	I1025 10:20:28.772697  624632 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 10:20:28.772751  624632 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:20:28.791535  624632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:20:28.808988  624632 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:20:28.809059  624632 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:20:28.828558  624632 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:20:28.846436  624632 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:20:28.957533  624632 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:20:29.097529  624632 docker.go:234] disabling docker service ...
	I1025 10:20:29.097602  624632 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:20:29.117544  624632 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:20:29.134925  624632 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:20:29.247635  624632 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:20:29.360206  624632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:20:29.376166  624632 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:20:29.396952  624632 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1025 10:20:29.397006  624632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:29.407930  624632 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 10:20:29.408000  624632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:29.419235  624632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:29.430133  624632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:29.440543  624632 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:20:29.451373  624632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:29.462502  624632 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:29.473297  624632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:29.484426  624632 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:20:29.494052  624632 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:20:29.502966  624632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:20:29.603708  624632 ssh_runner.go:195] Run: sudo systemctl restart crio
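Note: taken together, the sed edits above (10:20:29.397006 through 10:20:29.473297) leave /etc/crio/crio.conf.d/02-crio.conf with the following settings; this is reconstructed from the commands, not captured from the host:

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]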
	I1025 10:20:29.747929  624632 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:20:29.748008  624632 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:20:29.753943  624632 start.go:563] Will wait 60s for crictl version
	I1025 10:20:29.754044  624632 ssh_runner.go:195] Run: which crictl
	I1025 10:20:29.759141  624632 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:20:29.795398  624632 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:20:29.795496  624632 ssh_runner.go:195] Run: crio --version
	I1025 10:20:29.832578  624632 ssh_runner.go:195] Run: crio --version
	I1025 10:20:29.869113  624632 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
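The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, systemd cgroup manager, conmon cgroup, unprivileged-port sysctl), restarts CRI-O, and probes it with crictl. A minimal shell sketch to confirm the rewrite landed, assuming the same drop-in path and socket as this run:

	# inspect the keys the sed edits above were supposed to set
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# same probe minikube runs: version over the CRI socket
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version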
	
	
	==> CRI-O <==
	Oct 25 10:20:17 no-preload-899665 crio[778]: time="2025-10-25T10:20:17.509921346Z" level=info msg="Starting container: b9f8b0e4e6d2a584724d4581ec4b8805f7f7c31317cdb56a5cb1461d3a26af3d" id=ce2ae174-2cc0-445f-a629-1dc6bae8e33e name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:20:17 no-preload-899665 crio[778]: time="2025-10-25T10:20:17.514631848Z" level=info msg="Started container" PID=2922 containerID=b9f8b0e4e6d2a584724d4581ec4b8805f7f7c31317cdb56a5cb1461d3a26af3d description=kube-system/coredns-66bc5c9577-gtnvx/coredns id=ce2ae174-2cc0-445f-a629-1dc6bae8e33e name=/runtime.v1.RuntimeService/StartContainer sandboxID=f3a7fb63929557afa34a5da75d7d68ae03d76d37cf7a516a0cb63027975d2384
	Oct 25 10:20:20 no-preload-899665 crio[778]: time="2025-10-25T10:20:20.453115884Z" level=info msg="Running pod sandbox: default/busybox/POD" id=fa0d59ef-2447-4fea-8030-8a08ce1b6aa0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:20:20 no-preload-899665 crio[778]: time="2025-10-25T10:20:20.453241051Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:20 no-preload-899665 crio[778]: time="2025-10-25T10:20:20.460397602Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:03197364813f1628c24e515acfe4c4bc17b5e6c0a8e351bc4f56d31fc997c63c UID:ec5c2e6d-1ade-45df-8269-93809b94484b NetNS:/var/run/netns/b6d93f79-7bdc-4668-a2df-3155c25c6b74 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b838}] Aliases:map[]}"
	Oct 25 10:20:20 no-preload-899665 crio[778]: time="2025-10-25T10:20:20.460432309Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 25 10:20:20 no-preload-899665 crio[778]: time="2025-10-25T10:20:20.473180968Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:03197364813f1628c24e515acfe4c4bc17b5e6c0a8e351bc4f56d31fc997c63c UID:ec5c2e6d-1ade-45df-8269-93809b94484b NetNS:/var/run/netns/b6d93f79-7bdc-4668-a2df-3155c25c6b74 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b838}] Aliases:map[]}"
	Oct 25 10:20:20 no-preload-899665 crio[778]: time="2025-10-25T10:20:20.473392921Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 25 10:20:20 no-preload-899665 crio[778]: time="2025-10-25T10:20:20.474573594Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 10:20:20 no-preload-899665 crio[778]: time="2025-10-25T10:20:20.475626825Z" level=info msg="Ran pod sandbox 03197364813f1628c24e515acfe4c4bc17b5e6c0a8e351bc4f56d31fc997c63c with infra container: default/busybox/POD" id=fa0d59ef-2447-4fea-8030-8a08ce1b6aa0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:20:20 no-preload-899665 crio[778]: time="2025-10-25T10:20:20.477025033Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=975e8da2-3997-458c-86ab-54413d326296 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:20:20 no-preload-899665 crio[778]: time="2025-10-25T10:20:20.477145445Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=975e8da2-3997-458c-86ab-54413d326296 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:20:20 no-preload-899665 crio[778]: time="2025-10-25T10:20:20.477179863Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=975e8da2-3997-458c-86ab-54413d326296 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:20:20 no-preload-899665 crio[778]: time="2025-10-25T10:20:20.477750467Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2059292b-ce7e-42f6-a94c-b22364421b05 name=/runtime.v1.ImageService/PullImage
	Oct 25 10:20:20 no-preload-899665 crio[778]: time="2025-10-25T10:20:20.479431412Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 25 10:20:22 no-preload-899665 crio[778]: time="2025-10-25T10:20:22.514236204Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=2059292b-ce7e-42f6-a94c-b22364421b05 name=/runtime.v1.ImageService/PullImage
	Oct 25 10:20:22 no-preload-899665 crio[778]: time="2025-10-25T10:20:22.515081198Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0ac8fcae-94d8-4afe-924b-1689ac795493 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:20:22 no-preload-899665 crio[778]: time="2025-10-25T10:20:22.516859382Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dc823cd6-df3e-4954-b077-ad53589edc5c name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:20:22 no-preload-899665 crio[778]: time="2025-10-25T10:20:22.521139099Z" level=info msg="Creating container: default/busybox/busybox" id=97479a94-79fb-4db2-8b0b-b792e0a335d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:20:22 no-preload-899665 crio[778]: time="2025-10-25T10:20:22.521300264Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:22 no-preload-899665 crio[778]: time="2025-10-25T10:20:22.525131652Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:22 no-preload-899665 crio[778]: time="2025-10-25T10:20:22.525695074Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:22 no-preload-899665 crio[778]: time="2025-10-25T10:20:22.554395067Z" level=info msg="Created container 3f481755fea9ea305d27639f8cfdb3d39c84c4a0d40f47c93aa621000135a9f3: default/busybox/busybox" id=97479a94-79fb-4db2-8b0b-b792e0a335d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:20:22 no-preload-899665 crio[778]: time="2025-10-25T10:20:22.555204062Z" level=info msg="Starting container: 3f481755fea9ea305d27639f8cfdb3d39c84c4a0d40f47c93aa621000135a9f3" id=5384882c-e709-408b-b7d1-870953857395 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:20:22 no-preload-899665 crio[778]: time="2025-10-25T10:20:22.557125639Z" level=info msg="Started container" PID=2996 containerID=3f481755fea9ea305d27639f8cfdb3d39c84c4a0d40f47c93aa621000135a9f3 description=default/busybox/busybox id=5384882c-e709-408b-b7d1-870953857395 name=/runtime.v1.RuntimeService/StartContainer sandboxID=03197364813f1628c24e515acfe4c4bc17b5e6c0a8e351bc4f56d31fc997c63c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3f481755fea9e       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   03197364813f1       busybox                                     default
	b9f8b0e4e6d2a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago      Running             coredns                   0                   f3a7fb6392955       coredns-66bc5c9577-gtnvx                    kube-system
	3fe5b8bdc7d18       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   4569a92feae24       storage-provisioner                         kube-system
	bc919e2616965       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    24 seconds ago      Running             kindnet-cni               0                   210f7af0c4a85       kindnet-sjskf                               kube-system
	edfec7bfd8297       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      27 seconds ago      Running             kube-proxy                0                   d7b10a35917dc       kube-proxy-fdthr                            kube-system
	189e5eb9cf4fd       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      38 seconds ago      Running             kube-apiserver            0                   132e9cf839853       kube-apiserver-no-preload-899665            kube-system
	c20dfb53379dc       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      38 seconds ago      Running             kube-scheduler            0                   5914efa7520cb       kube-scheduler-no-preload-899665            kube-system
	04d64f268301e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      38 seconds ago      Running             kube-controller-manager   0                   371ccae89e9c1       kube-controller-manager-no-preload-899665   kube-system
	84050098702c6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      38 seconds ago      Running             etcd                      0                   8eb46274eb61f       etcd-no-preload-899665                      kube-system
	
	
	==> coredns [b9f8b0e4e6d2a584724d4581ec4b8805f7f7c31317cdb56a5cb1461d3a26af3d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33104 - 62887 "HINFO IN 7324878309645731113.8514552986067078845. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.136047664s
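The closing HINFO query against 127.0.0.1 is CoreDNS probing itself at startup; the NXDOMAIN answer is expected. A sketch for exercising cluster DNS end to end from a throwaway pod (image name reused from this run; otherwise generic):

	kubectl run dns-probe --rm -it --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc \
	  -- nslookup kubernetes.default.svc.cluster.local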
	
	
	==> describe nodes <==
	Name:               no-preload-899665
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-899665
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=no-preload-899665
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_19_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:19:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-899665
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:20:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:20:28 +0000   Sat, 25 Oct 2025 10:19:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:20:28 +0000   Sat, 25 Oct 2025 10:19:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:20:28 +0000   Sat, 25 Oct 2025 10:19:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:20:28 +0000   Sat, 25 Oct 2025 10:20:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-899665
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                9552a4c0-ffdc-4517-8db3-fa4623099c2a
	  Boot ID:                    41de8bfa-0cc1-441f-80d4-c56fb8371229
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-gtnvx                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-no-preload-899665                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-sjskf                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-no-preload-899665             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-no-preload-899665    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-fdthr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-no-preload-899665             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  Starting                 39s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s (x8 over 39s)  kubelet          Node no-preload-899665 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s (x8 over 39s)  kubelet          Node no-preload-899665 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s (x8 over 39s)  kubelet          Node no-preload-899665 status is now: NodeHasSufficientPID
	  Normal  Starting                 33s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s                kubelet          Node no-preload-899665 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s                kubelet          Node no-preload-899665 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s                kubelet          Node no-preload-899665 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node no-preload-899665 event: Registered Node no-preload-899665 in Controller
	  Normal  NodeReady                13s                kubelet          Node no-preload-899665 status is now: NodeReady
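The events show the node registering, cycling through the pressure conditions, and reaching NodeReady 13s before this capture. A sketch to pull the same state straight from the API (node name from this run):

	kubectl get node no-preload-899665 \
	  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
	kubectl get events --field-selector involvedObject.name=no-preload-899665 --sort-by=.lastTimestamp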
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[Oct25 10:18] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 3d 4d bf 49 5d 08 06
	[  +0.000365] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fe 72 b8 ab d2 81 08 06
	[ +29.291338] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 96 23 11 37 e3 00 08 06
	[  +0.000335] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[ +21.527050] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 89 98 95 1f c3 08 06
	[  +0.000689] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[Oct25 10:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[  +9.472150] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
	[  +6.585715] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ce 90 e9 36 a0 95 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[ +15.111475] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 5e 04 d2 54 0d 08 06
	[  +0.000467] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
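The repeated "martian source" lines are routine kernel logging for packets arriving on eth0 with an unexpected source address, common with bridged pod traffic; they are governed by the log_martians sysctls. A sketch to inspect (and, if desired, silence) that logging on the host:

	sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.default.log_martians
	# assumption: you actually want the noise off host-wide
	sudo sysctl -w net.ipv4.conf.all.log_martians=0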
	
	
	==> etcd [84050098702c670e3433a2d0ec1f53384478b09ea946db86e78f06726a974eef] <==
	{"level":"info","ts":"2025-10-25T10:19:55.528519Z","caller":"traceutil/trace.go:172","msg":"trace[88543877] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-admin; range_end:; response_count:0; response_revision:54; }","duration":"152.944662ms","start":"2025-10-25T10:19:55.375566Z","end":"2025-10-25T10:19:55.528510Z","steps":["trace[88543877] 'agreement among raft nodes before linearized reading'  (duration: 152.797429ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:19:55.528298Z","caller":"traceutil/trace.go:172","msg":"trace[319060401] transaction","detail":"{read_only:false; response_revision:54; number_of_response:1; }","duration":"151.373481ms","start":"2025-10-25T10:19:55.376906Z","end":"2025-10-25T10:19:55.528279Z","steps":["trace[319060401] 'process raft request'  (duration: 151.328064ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:19:55.528376Z","caller":"traceutil/trace.go:172","msg":"trace[1942997630] transaction","detail":"{read_only:false; response_revision:53; number_of_response:1; }","duration":"152.552985ms","start":"2025-10-25T10:19:55.375794Z","end":"2025-10-25T10:19:55.528347Z","steps":["trace[1942997630] 'process raft request'  (duration: 152.397116ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:19:55.647202Z","caller":"traceutil/trace.go:172","msg":"trace[939893828] linearizableReadLoop","detail":"{readStateIndex:58; appliedIndex:58; }","duration":"114.532457ms","start":"2025-10-25T10:19:55.532647Z","end":"2025-10-25T10:19:55.647180Z","steps":["trace[939893828] 'read index received'  (duration: 114.522696ms)","trace[939893828] 'applied index is now lower than readState.Index'  (duration: 8.086µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T10:19:55.656219Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.547886ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-edit\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-25T10:19:55.656284Z","caller":"traceutil/trace.go:172","msg":"trace[1241253381] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-edit; range_end:; response_count:0; response_revision:54; }","duration":"123.631889ms","start":"2025-10-25T10:19:55.532637Z","end":"2025-10-25T10:19:55.656269Z","steps":["trace[1241253381] 'agreement among raft nodes before linearized reading'  (duration: 114.621942ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:19:55.656412Z","caller":"traceutil/trace.go:172","msg":"trace[311617315] transaction","detail":"{read_only:false; response_revision:56; number_of_response:1; }","duration":"123.861222ms","start":"2025-10-25T10:19:55.532537Z","end":"2025-10-25T10:19:55.656398Z","steps":["trace[311617315] 'process raft request'  (duration: 123.789185ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:19:55.656412Z","caller":"traceutil/trace.go:172","msg":"trace[1028242718] transaction","detail":"{read_only:false; response_revision:55; number_of_response:1; }","duration":"124.888552ms","start":"2025-10-25T10:19:55.531510Z","end":"2025-10-25T10:19:55.656399Z","steps":["trace[1028242718] 'process raft request'  (duration: 115.623379ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:19:56.034890Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"343.178121ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:discovery\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-25T10:19:56.034963Z","caller":"traceutil/trace.go:172","msg":"trace[2110616630] range","detail":"{range_begin:/registry/clusterroles/system:discovery; range_end:; response_count:0; response_revision:61; }","duration":"343.268344ms","start":"2025-10-25T10:19:55.691679Z","end":"2025-10-25T10:19:56.034947Z","steps":["trace[2110616630] 'agreement among raft nodes before linearized reading'  (duration: 99.607501ms)","trace[2110616630] 'range keys from in-memory index tree'  (duration: 243.527758ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T10:19:56.035004Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"243.515371ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356196337570777 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/flowschemas/kube-system-service-accounts\" mod_revision:55 > success:<request_put:<key:\"/registry/flowschemas/kube-system-service-accounts\" value_size:1047 >> failure:<request_range:<key:\"/registry/flowschemas/kube-system-service-accounts\" > >>","response":"size:14"}
	{"level":"warn","ts":"2025-10-25T10:19:56.035002Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-25T10:19:55.691656Z","time spent":"343.335315ms","remote":"127.0.0.1:54790","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":27,"request content":"key:\"/registry/clusterroles/system:discovery\" limit:1 "}
	{"level":"info","ts":"2025-10-25T10:19:56.035131Z","caller":"traceutil/trace.go:172","msg":"trace[631916003] transaction","detail":"{read_only:false; response_revision:63; number_of_response:1; }","duration":"343.19544ms","start":"2025-10-25T10:19:55.691927Z","end":"2025-10-25T10:19:56.035123Z","steps":["trace[631916003] 'process raft request'  (duration: 343.145579ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:19:56.035248Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-25T10:19:55.691914Z","time spent":"343.304388ms","remote":"127.0.0.1:55018","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":555,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/prioritylevelconfigurations/exempt\" mod_revision:0 > success:<request_put:<key:\"/registry/prioritylevelconfigurations/exempt\" value_size:503 >> failure:<>"}
	{"level":"info","ts":"2025-10-25T10:19:56.035372Z","caller":"traceutil/trace.go:172","msg":"trace[839017937] transaction","detail":"{read_only:false; response_revision:62; number_of_response:1; }","duration":"344.586123ms","start":"2025-10-25T10:19:55.690769Z","end":"2025-10-25T10:19:56.035355Z","steps":["trace[839017937] 'process raft request'  (duration: 100.652601ms)","trace[839017937] 'compare'  (duration: 243.412461ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T10:19:56.035450Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-25T10:19:55.690750Z","time spent":"344.652859ms","remote":"127.0.0.1:54984","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1105,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/flowschemas/kube-system-service-accounts\" mod_revision:55 > success:<request_put:<key:\"/registry/flowschemas/kube-system-service-accounts\" value_size:1047 >> failure:<request_range:<key:\"/registry/flowschemas/kube-system-service-accounts\" > >"}
	{"level":"info","ts":"2025-10-25T10:19:56.309912Z","caller":"traceutil/trace.go:172","msg":"trace[2026956518] transaction","detail":"{read_only:false; response_revision:70; number_of_response:1; }","duration":"217.630973ms","start":"2025-10-25T10:19:56.092258Z","end":"2025-10-25T10:19:56.309889Z","steps":["trace[2026956518] 'process raft request'  (duration: 123.322954ms)","trace[2026956518] 'compare'  (duration: 94.125137ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T10:19:56.309965Z","caller":"traceutil/trace.go:172","msg":"trace[917209137] transaction","detail":"{read_only:false; response_revision:71; number_of_response:1; }","duration":"217.203909ms","start":"2025-10-25T10:19:56.092750Z","end":"2025-10-25T10:19:56.309953Z","steps":["trace[917209137] 'process raft request'  (duration: 217.074683ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:20:11.104910Z","caller":"traceutil/trace.go:172","msg":"trace[194576289] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"116.798741ms","start":"2025-10-25T10:20:10.988090Z","end":"2025-10-25T10:20:11.104889Z","steps":["trace[194576289] 'process raft request'  (duration: 116.652746ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:20:11.262032Z","caller":"traceutil/trace.go:172","msg":"trace[1050864771] transaction","detail":"{read_only:false; response_revision:426; number_of_response:1; }","duration":"147.16191ms","start":"2025-10-25T10:20:11.114849Z","end":"2025-10-25T10:20:11.262011Z","steps":["trace[1050864771] 'process raft request'  (duration: 146.906065ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:20:11.384478Z","caller":"traceutil/trace.go:172","msg":"trace[1189151547] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"113.993362ms","start":"2025-10-25T10:20:11.270467Z","end":"2025-10-25T10:20:11.384460Z","steps":["trace[1189151547] 'process raft request'  (duration: 111.646871ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:20:11.676937Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.076511ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T10:20:11.677004Z","caller":"traceutil/trace.go:172","msg":"trace[1510450075] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:427; }","duration":"131.164732ms","start":"2025-10-25T10:20:11.545824Z","end":"2025-10-25T10:20:11.676989Z","steps":["trace[1510450075] 'range keys from in-memory index tree'  (duration: 131.033105ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:20:11.677123Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"148.204464ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356196337571602 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-899665\" mod_revision:311 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-899665\" value_size:7212 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-899665\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-25T10:20:11.677219Z","caller":"traceutil/trace.go:172","msg":"trace[2087524324] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"282.511942ms","start":"2025-10-25T10:20:11.394690Z","end":"2025-10-25T10:20:11.677202Z","steps":["trace[2087524324] 'process raft request'  (duration: 134.151836ms)","trace[2087524324] 'compare'  (duration: 148.049883ms)"],"step_count":2}
	
	
	==> kernel <==
	 10:20:31 up  2:02,  0 user,  load average: 6.35, 5.00, 6.00
	Linux no-preload-899665 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bc919e2616965a4381b0db64d1334ff0ee4cd0769b05064e1f6978c0c7341f6e] <==
	I1025 10:20:06.580799       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:20:06.581121       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 10:20:06.581313       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:20:06.581352       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:20:06.581378       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:20:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:20:06.785103       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:20:06.785136       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:20:06.785156       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:20:06.785367       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 10:20:07.185843       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:20:07.185881       1 metrics.go:72] Registering metrics
	I1025 10:20:07.185951       1 controller.go:711] "Syncing nftables rules"
	I1025 10:20:16.788427       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:20:16.788506       1 main.go:301] handling current node
	I1025 10:20:26.786551       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:20:26.786625       1 main.go:301] handling current node
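kindnetd reports syncing nftables rules and handling only the current node's IPs. A sketch to eyeball the programmed rules from inside the node (e.g. via minikube ssh); table names vary across kindnet versions, so this simply lists everything:

	sudo nft list tables
	sudo nft list ruleset | head -n 40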
	
	
	==> kube-apiserver [189e5eb9cf4fdb62289788f5b86c5dee20c26d08aa780eed11e46e1c44445107] <==
	E1025 10:19:54.588429       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1025 10:19:54.612180       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:19:54.615250       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 10:19:54.626132       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:19:54.696062       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:19:54.696305       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:19:55.366041       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 10:19:55.529765       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 10:19:55.529830       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:19:57.084695       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:19:57.141242       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:19:57.262728       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 10:19:57.271451       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1025 10:19:57.273208       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:19:57.280094       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:19:57.355809       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:19:58.083068       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:19:58.096004       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 10:19:58.107985       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 10:20:02.409642       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:20:02.415892       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:20:03.309672       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1025 10:20:03.309672       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1025 10:20:03.458132       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1025 10:20:29.277174       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:59406: use of closed network connection
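The apiserver log records the two service ClusterIP allocations for this cluster: 10.96.0.1 (kubernetes) and 10.96.0.10 (kube-dns). A sketch to confirm both from the client side:

	kubectl -n default get svc kubernetes -o jsonpath='{.spec.clusterIP}{"\n"}'
	kubectl -n kube-system get svc kube-dns -o jsonpath='{.spec.clusterIP}{"\n"}'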
	
	
	==> kube-controller-manager [04d64f268301eec2e4e55eaeff907dde8a7a7191099f010e9c25194a8e04d93c] <==
	I1025 10:20:02.353714       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 10:20:02.353992       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:20:02.354037       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 10:20:02.355187       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 10:20:02.355541       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 10:20:02.357904       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 10:20:02.359625       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 10:20:02.359815       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 10:20:02.359856       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 10:20:02.359883       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 10:20:02.359890       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 10:20:02.359896       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 10:20:02.360885       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 10:20:02.360906       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:20:02.361047       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 10:20:02.361712       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:20:02.363223       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 10:20:02.367872       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-899665" podCIDRs=["10.244.0.0/24"]
	I1025 10:20:02.372194       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 10:20:02.381420       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 10:20:02.381556       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 10:20:02.381662       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-899665"
	I1025 10:20:02.381722       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 10:20:02.384032       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:20:17.384422       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
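The node-lifecycle controller entered "master disruption mode" at 10:20:02 while no node was Ready and exited at 10:20:17 once the node came up. A sketch to observe the same Ready transition (node name from this run):

	kubectl get node no-preload-899665 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
	# or follow it live:
	kubectl get nodes -w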
	
	
	==> kube-proxy [edfec7bfd8297a5256269901f7dd859159df533e85da690acd8305d100910f84] <==
	I1025 10:20:03.840818       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:20:03.938444       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:20:04.039412       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:20:04.039459       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 10:20:04.039552       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:20:04.078364       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:20:04.078446       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:20:04.096442       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:20:04.118544       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:20:04.118666       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:20:04.130742       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:20:04.130790       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:20:04.134796       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:20:04.134891       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:20:04.135983       1 config.go:309] "Starting node config controller"
	I1025 10:20:04.136005       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:20:04.136014       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:20:04.136355       1 config.go:200] "Starting service config controller"
	I1025 10:20:04.136366       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:20:04.136374       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:20:04.231951       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 10:20:04.235092       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
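kube-proxy selected the iptables proxier and set route_localnet=1 to keep localhost NodePorts working. A sketch to verify both on the node:

	sysctl net.ipv4.conf.all.route_localnet
	# KUBE-SERVICES is the entry chain the iptables proxier installs
	sudo iptables -t nat -L KUBE-SERVICES -n | head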
	
	
	==> kube-scheduler [c20dfb53379dc4fe16f685f8ce7da501ac8531a8b0d7bf4e1a448a6b320f0b2c] <==
	E1025 10:19:54.657419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 10:19:54.657515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 10:19:54.657560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 10:19:54.657642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:19:54.657635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:19:55.546967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 10:19:55.583766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 10:19:55.613373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:19:55.626489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 10:19:55.652785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1025 10:19:55.748338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:19:55.751734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 10:19:55.779563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 10:19:55.843215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:19:55.937139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 10:19:55.977990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:19:56.048759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:19:56.083466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 10:19:56.086607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 10:19:56.140534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:19:56.146089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 10:19:56.193237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 10:19:56.193237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 10:19:56.245372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1025 10:19:58.153783       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
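The burst of "Failed to watch ... forbidden" errors is the usual startup race: the scheduler's informers come up before RBAC bootstrap finishes, and the closing "Caches are synced" line shows it resolved. A sketch to confirm the scheduler's permissions after bootstrap:

	kubectl auth can-i list nodes --as=system:kube-scheduler
	kubectl auth can-i list pods --as=system:kube-scheduler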
	
	
	==> kubelet <==
	Oct 25 10:19:58 no-preload-899665 kubelet[2317]: E1025 10:19:58.990028    2317 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-no-preload-899665\" already exists" pod="kube-system/kube-scheduler-no-preload-899665"
	Oct 25 10:19:59 no-preload-899665 kubelet[2317]: I1025 10:19:59.020172    2317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-899665" podStartSLOduration=1.020150643 podStartE2EDuration="1.020150643s" podCreationTimestamp="2025-10-25 10:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:19:59.00968279 +0000 UTC m=+1.151118327" watchObservedRunningTime="2025-10-25 10:19:59.020150643 +0000 UTC m=+1.161586169"
	Oct 25 10:19:59 no-preload-899665 kubelet[2317]: I1025 10:19:59.031863    2317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-899665" podStartSLOduration=1.031830616 podStartE2EDuration="1.031830616s" podCreationTimestamp="2025-10-25 10:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:19:59.020726377 +0000 UTC m=+1.162161922" watchObservedRunningTime="2025-10-25 10:19:59.031830616 +0000 UTC m=+1.173266164"
	Oct 25 10:19:59 no-preload-899665 kubelet[2317]: I1025 10:19:59.043838    2317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-899665" podStartSLOduration=1.043814666 podStartE2EDuration="1.043814666s" podCreationTimestamp="2025-10-25 10:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:19:59.032162963 +0000 UTC m=+1.173598504" watchObservedRunningTime="2025-10-25 10:19:59.043814666 +0000 UTC m=+1.185250215"
	Oct 25 10:19:59 no-preload-899665 kubelet[2317]: I1025 10:19:59.061355    2317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-899665" podStartSLOduration=1.061330432 podStartE2EDuration="1.061330432s" podCreationTimestamp="2025-10-25 10:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:19:59.044437798 +0000 UTC m=+1.185873346" watchObservedRunningTime="2025-10-25 10:19:59.061330432 +0000 UTC m=+1.202765958"
	Oct 25 10:20:02 no-preload-899665 kubelet[2317]: I1025 10:20:02.400809    2317 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 25 10:20:02 no-preload-899665 kubelet[2317]: I1025 10:20:02.401695    2317 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 25 10:20:03 no-preload-899665 kubelet[2317]: I1025 10:20:03.373547    2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aea032c1-4c95-4c86-81cc-1fd23a4a3440-xtables-lock\") pod \"kube-proxy-fdthr\" (UID: \"aea032c1-4c95-4c86-81cc-1fd23a4a3440\") " pod="kube-system/kube-proxy-fdthr"
	Oct 25 10:20:03 no-preload-899665 kubelet[2317]: I1025 10:20:03.373633    2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45q2x\" (UniqueName: \"kubernetes.io/projected/aea032c1-4c95-4c86-81cc-1fd23a4a3440-kube-api-access-45q2x\") pod \"kube-proxy-fdthr\" (UID: \"aea032c1-4c95-4c86-81cc-1fd23a4a3440\") " pod="kube-system/kube-proxy-fdthr"
	Oct 25 10:20:03 no-preload-899665 kubelet[2317]: I1025 10:20:03.373661    2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/adca7025-fccd-45d0-858a-b64ea960ec85-cni-cfg\") pod \"kindnet-sjskf\" (UID: \"adca7025-fccd-45d0-858a-b64ea960ec85\") " pod="kube-system/kindnet-sjskf"
	Oct 25 10:20:03 no-preload-899665 kubelet[2317]: I1025 10:20:03.373683    2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98rw4\" (UniqueName: \"kubernetes.io/projected/adca7025-fccd-45d0-858a-b64ea960ec85-kube-api-access-98rw4\") pod \"kindnet-sjskf\" (UID: \"adca7025-fccd-45d0-858a-b64ea960ec85\") " pod="kube-system/kindnet-sjskf"
	Oct 25 10:20:03 no-preload-899665 kubelet[2317]: I1025 10:20:03.373716    2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aea032c1-4c95-4c86-81cc-1fd23a4a3440-kube-proxy\") pod \"kube-proxy-fdthr\" (UID: \"aea032c1-4c95-4c86-81cc-1fd23a4a3440\") " pod="kube-system/kube-proxy-fdthr"
	Oct 25 10:20:03 no-preload-899665 kubelet[2317]: I1025 10:20:03.373735    2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/adca7025-fccd-45d0-858a-b64ea960ec85-xtables-lock\") pod \"kindnet-sjskf\" (UID: \"adca7025-fccd-45d0-858a-b64ea960ec85\") " pod="kube-system/kindnet-sjskf"
	Oct 25 10:20:03 no-preload-899665 kubelet[2317]: I1025 10:20:03.373766    2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/adca7025-fccd-45d0-858a-b64ea960ec85-lib-modules\") pod \"kindnet-sjskf\" (UID: \"adca7025-fccd-45d0-858a-b64ea960ec85\") " pod="kube-system/kindnet-sjskf"
	Oct 25 10:20:03 no-preload-899665 kubelet[2317]: I1025 10:20:03.373792    2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aea032c1-4c95-4c86-81cc-1fd23a4a3440-lib-modules\") pod \"kube-proxy-fdthr\" (UID: \"aea032c1-4c95-4c86-81cc-1fd23a4a3440\") " pod="kube-system/kube-proxy-fdthr"
	Oct 25 10:20:04 no-preload-899665 kubelet[2317]: I1025 10:20:04.019161    2317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fdthr" podStartSLOduration=1.019134192 podStartE2EDuration="1.019134192s" podCreationTimestamp="2025-10-25 10:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:20:04.017880421 +0000 UTC m=+6.159315967" watchObservedRunningTime="2025-10-25 10:20:04.019134192 +0000 UTC m=+6.160569738"
	Oct 25 10:20:07 no-preload-899665 kubelet[2317]: I1025 10:20:07.024795    2317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-sjskf" podStartSLOduration=1.462310126 podStartE2EDuration="4.02476883s" podCreationTimestamp="2025-10-25 10:20:03 +0000 UTC" firstStartedPulling="2025-10-25 10:20:03.666377948 +0000 UTC m=+5.807813479" lastFinishedPulling="2025-10-25 10:20:06.228836651 +0000 UTC m=+8.370272183" observedRunningTime="2025-10-25 10:20:07.024393449 +0000 UTC m=+9.165828995" watchObservedRunningTime="2025-10-25 10:20:07.02476883 +0000 UTC m=+9.166204377"
	Oct 25 10:20:17 no-preload-899665 kubelet[2317]: I1025 10:20:17.083345    2317 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 25 10:20:17 no-preload-899665 kubelet[2317]: I1025 10:20:17.167517    2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zj5d\" (UniqueName: \"kubernetes.io/projected/f2d8d6d3-7a6f-461b-9084-c640ecc14248-kube-api-access-7zj5d\") pod \"storage-provisioner\" (UID: \"f2d8d6d3-7a6f-461b-9084-c640ecc14248\") " pod="kube-system/storage-provisioner"
	Oct 25 10:20:17 no-preload-899665 kubelet[2317]: I1025 10:20:17.167588    2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f2d8d6d3-7a6f-461b-9084-c640ecc14248-tmp\") pod \"storage-provisioner\" (UID: \"f2d8d6d3-7a6f-461b-9084-c640ecc14248\") " pod="kube-system/storage-provisioner"
	Oct 25 10:20:17 no-preload-899665 kubelet[2317]: I1025 10:20:17.167617    2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7rhz\" (UniqueName: \"kubernetes.io/projected/1a53a0ee-a470-493d-903e-89f7603b058d-kube-api-access-c7rhz\") pod \"coredns-66bc5c9577-gtnvx\" (UID: \"1a53a0ee-a470-493d-903e-89f7603b058d\") " pod="kube-system/coredns-66bc5c9577-gtnvx"
	Oct 25 10:20:17 no-preload-899665 kubelet[2317]: I1025 10:20:17.167649    2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a53a0ee-a470-493d-903e-89f7603b058d-config-volume\") pod \"coredns-66bc5c9577-gtnvx\" (UID: \"1a53a0ee-a470-493d-903e-89f7603b058d\") " pod="kube-system/coredns-66bc5c9577-gtnvx"
	Oct 25 10:20:18 no-preload-899665 kubelet[2317]: I1025 10:20:18.052755    2317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gtnvx" podStartSLOduration=15.052731354 podStartE2EDuration="15.052731354s" podCreationTimestamp="2025-10-25 10:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:20:18.052571966 +0000 UTC m=+20.194007512" watchObservedRunningTime="2025-10-25 10:20:18.052731354 +0000 UTC m=+20.194166900"
	Oct 25 10:20:18 no-preload-899665 kubelet[2317]: I1025 10:20:18.064612    2317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.064585981 podStartE2EDuration="14.064585981s" podCreationTimestamp="2025-10-25 10:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:20:18.06438286 +0000 UTC m=+20.205818406" watchObservedRunningTime="2025-10-25 10:20:18.064585981 +0000 UTC m=+20.206021527"
	Oct 25 10:20:20 no-preload-899665 kubelet[2317]: I1025 10:20:20.185829    2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf7xw\" (UniqueName: \"kubernetes.io/projected/ec5c2e6d-1ade-45df-8269-93809b94484b-kube-api-access-wf7xw\") pod \"busybox\" (UID: \"ec5c2e6d-1ade-45df-8269-93809b94484b\") " pod="default/busybox"
	
	
	==> storage-provisioner [3fe5b8bdc7d18195e19f272a3838ed2fbde9f7f77db04c8e3c53933222892007] <==
	I1025 10:20:17.524574       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:20:17.537574       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:20:17.537666       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 10:20:17.541233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:20:17.550776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:20:17.550956       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:20:17.551217       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-899665_894e670f-05a7-48db-b8f2-325fe956130d!
	I1025 10:20:17.551570       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9ae89fb7-1d09-4307-b93a-101d3aa3927b", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-899665_894e670f-05a7-48db-b8f2-325fe956130d became leader
	W1025 10:20:17.554873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:20:17.561762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:20:17.651800       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-899665_894e670f-05a7-48db-b8f2-325fe956130d!
	W1025 10:20:19.566906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:20:19.573056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:20:21.577178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:20:21.582047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:20:23.586875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:20:23.592810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:20:25.596267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:20:25.601092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:20:27.604611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:20:27.610555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:20:29.614282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:20:29.618941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-899665 -n no-preload-899665
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-899665 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.60s)
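
Note: the repeated warnings.go:70 lines in the storage-provisioner log above are expected on Kubernetes v1.33+: the provisioner's leader election still takes a v1 Endpoints lock, which the API server now flags in favor of discovery.k8s.io/v1 EndpointSlice. A minimal sketch for inspecting the lock object by hand (object name and context taken from the log above; not part of the test itself):

	# Show the Endpoints object used as the leader-election lock; its annotations carry the current holder identity
	kubectl --context no-preload-899665 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml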

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.27s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-667966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-667966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (283.608969ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:20:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-667966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
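
Note: MK_ADDON_ENABLE_PAUSED above is raised because minikube checks whether the cluster is paused before enabling an addon, and that check shells out to `sudo runc list -f json` inside the node; the `open /run/runc: no such file or directory` error means the runc state directory the check reads does not exist. A minimal sketch for reproducing the check by hand (profile name taken from the log; whether crio keeps its runc state under a different root here is an assumption to verify):

	# Run the same listing the paused check runs, inside the node
	minikube -p newest-cni-667966 ssh -- sudo runc list -f json
	# See whether the default runc state directory exists at all
	minikube -p newest-cni-667966 ssh -- sudo ls -la /run/runc
	# crio can configure a different state root for its runc handler; compare against the node's crio config
	minikube -p newest-cni-667966 ssh -- sudo crio config | grep -A2 runtime_root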
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-667966
helpers_test.go:243: (dbg) docker inspect newest-cni-667966:

-- stdout --
	[
	    {
	        "Id": "cede76718eb297dbc08c6a92f84a8a33664f7c9525ae93a52761622e2228f38d",
	        "Created": "2025-10-25T10:20:12.207812957Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 622156,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:20:12.277584865Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/cede76718eb297dbc08c6a92f84a8a33664f7c9525ae93a52761622e2228f38d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cede76718eb297dbc08c6a92f84a8a33664f7c9525ae93a52761622e2228f38d/hostname",
	        "HostsPath": "/var/lib/docker/containers/cede76718eb297dbc08c6a92f84a8a33664f7c9525ae93a52761622e2228f38d/hosts",
	        "LogPath": "/var/lib/docker/containers/cede76718eb297dbc08c6a92f84a8a33664f7c9525ae93a52761622e2228f38d/cede76718eb297dbc08c6a92f84a8a33664f7c9525ae93a52761622e2228f38d-json.log",
	        "Name": "/newest-cni-667966",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-667966:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-667966",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cede76718eb297dbc08c6a92f84a8a33664f7c9525ae93a52761622e2228f38d",
	                "LowerDir": "/var/lib/docker/overlay2/ced9eee064c8b62082c8ab15ce64e3d3efdb1a398a85d422f795367ad25ee78d-init/diff:/var/lib/docker/overlay2/9d1960ec6ab151df7efbd4d43f9ccd1aaf4e0bd9e7db4285118644a6d5a54279/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ced9eee064c8b62082c8ab15ce64e3d3efdb1a398a85d422f795367ad25ee78d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ced9eee064c8b62082c8ab15ce64e3d3efdb1a398a85d422f795367ad25ee78d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ced9eee064c8b62082c8ab15ce64e3d3efdb1a398a85d422f795367ad25ee78d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-667966",
	                "Source": "/var/lib/docker/volumes/newest-cni-667966/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-667966",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-667966",
	                "name.minikube.sigs.k8s.io": "newest-cni-667966",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fec79be30fe0e36168a1d0a415ccc5c620dc41ddba8c582bc600b5bb1fb504de",
	            "SandboxKey": "/var/run/docker/netns/fec79be30fe0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-667966": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:71:7a:14:e7:b1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1607edd0e575c882979f9db63a22ad5ee1f0aabcbcf3a5dc021515221638bbcb",
	                    "EndpointID": "0a3be67a0047f06c10a5f61684aa9e0e61a06609898f2815aa7928fb6ea98e48",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-667966",
	                        "cede76718eb2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
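
Note: the individual fields the harness reads out of this inspect dump can be extracted directly with docker's Go-template support, the same mechanism these logs use elsewhere for the SSH port; a quick sketch against the container above:

	# Container state, as consulted before post-mortem collection
	docker inspect -f '{{.State.Status}}' newest-cni-667966
	# Host port mapped to the Kubernetes API server port (8443/tcp)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-667966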
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-667966 -n newest-cni-667966
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-667966 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-667966 logs -n 25: (1.052262907s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-119085 sudo cat /etc/docker/daemon.json                                                                                                                                                                                            │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │                     │
	│ ssh     │ -p flannel-119085 sudo docker system info                                                                                                                                                                                                     │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │                     │
	│ ssh     │ -p flannel-119085 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                    │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │                     │
	│ ssh     │ -p flannel-119085 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                    │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │ 25 Oct 25 10:19 UTC │
	│ ssh     │ -p flannel-119085 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                               │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │                     │
	│ ssh     │ -p flannel-119085 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                         │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │ 25 Oct 25 10:19 UTC │
	│ ssh     │ -p flannel-119085 sudo cri-dockerd --version                                                                                                                                                                                                  │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                    │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ ssh     │ -p flannel-119085 sudo systemctl cat containerd --no-pager                                                                                                                                                                                    │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                             │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo cat /etc/containerd/config.toml                                                                                                                                                                                        │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo containerd config dump                                                                                                                                                                                                 │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                          │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo systemctl cat crio --no-pager                                                                                                                                                                                          │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-714798 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-714798 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ ssh     │ -p flannel-119085 sudo crio config                                                                                                                                                                                                            │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ delete  │ -p flannel-119085                                                                                                                                                                                                                             │ flannel-119085         │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ stop    │ -p old-k8s-version-714798 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-714798 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p newest-cni-667966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-667966      │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-714798 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-714798 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p old-k8s-version-714798 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-714798 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-899665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-899665      │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ stop    │ -p no-preload-899665 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-899665      │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-667966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-667966      │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:20:23
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:20:23.300709  624632 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:20:23.301096  624632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:20:23.301114  624632 out.go:374] Setting ErrFile to fd 2...
	I1025 10:20:23.301122  624632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:20:23.301572  624632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 10:20:23.302262  624632 out.go:368] Setting JSON to false
	I1025 10:20:23.304299  624632 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7372,"bootTime":1761380251,"procs":417,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 10:20:23.304458  624632 start.go:141] virtualization: kvm guest
	I1025 10:20:23.306960  624632 out.go:179] * [old-k8s-version-714798] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 10:20:23.308909  624632 notify.go:220] Checking for updates...
	I1025 10:20:23.309498  624632 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:20:23.311348  624632 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:20:23.313341  624632 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:20:23.315424  624632 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	I1025 10:20:23.317047  624632 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 10:20:23.319372  624632 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:20:23.322053  624632 config.go:182] Loaded profile config "old-k8s-version-714798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:20:23.324462  624632 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1025 10:20:22.722087  613485 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:20:22.723533  613485 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:20:22.723565  613485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:20:22.723639  613485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:20:22.752475  613485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:20:22.759476  613485 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:20:22.759507  613485 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:20:22.759575  613485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:20:22.794357  613485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:20:22.832395  613485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 10:20:22.930076  613485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:20:22.934919  613485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:20:22.938143  613485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:20:23.068420  613485 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1025 10:20:23.069958  613485 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-767846" to be "Ready" ...
	I1025 10:20:23.362383  613485 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1025 10:20:23.326650  624632 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:20:23.361560  624632 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 10:20:23.362134  624632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:20:23.474991  624632 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-25 10:20:23.456103682 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:20:23.475150  624632 docker.go:318] overlay module found
	I1025 10:20:23.476788  624632 out.go:179] * Using the docker driver based on existing profile
	I1025 10:20:23.478398  624632 start.go:305] selected driver: docker
	I1025 10:20:23.478425  624632 start.go:925] validating driver "docker" against &{Name:old-k8s-version-714798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-714798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:20:23.478569  624632 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:20:23.479393  624632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:20:23.571473  624632 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-25 10:20:23.559687458 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:20:23.571948  624632 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:20:23.572037  624632 cni.go:84] Creating CNI manager for ""
	I1025 10:20:23.572109  624632 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:20:23.572191  624632 start.go:349] cluster config:
	{Name:old-k8s-version-714798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-714798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:20:23.574966  624632 out.go:179] * Starting "old-k8s-version-714798" primary control-plane node in "old-k8s-version-714798" cluster
	I1025 10:20:23.576372  624632 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:20:23.577799  624632 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:20:23.579475  624632 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 10:20:23.579510  624632 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:20:23.579535  624632 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1025 10:20:23.579548  624632 cache.go:58] Caching tarball of preloaded images
	I1025 10:20:23.579656  624632 preload.go:233] Found /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 10:20:23.579675  624632 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1025 10:20:23.579810  624632 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/old-k8s-version-714798/config.json ...
	I1025 10:20:23.607233  624632 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:20:23.607260  624632 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:20:23.607282  624632 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:20:23.607349  624632 start.go:360] acquireMachinesLock for old-k8s-version-714798: {Name:mk97e2141704e9680122a6db3eca4557d7d2aee2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:20:23.607435  624632 start.go:364] duration metric: took 51.014µs to acquireMachinesLock for "old-k8s-version-714798"
	I1025 10:20:23.607461  624632 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:20:23.607471  624632 fix.go:54] fixHost starting: 
	I1025 10:20:23.607767  624632 cli_runner.go:164] Run: docker container inspect old-k8s-version-714798 --format={{.State.Status}}
	I1025 10:20:23.629577  624632 fix.go:112] recreateIfNeeded on old-k8s-version-714798: state=Stopped err=<nil>
	W1025 10:20:23.629619  624632 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:20:23.363674  613485 addons.go:514] duration metric: took 674.45378ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 10:20:23.574181  613485 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-767846" context rescaled to 1 replicas
	W1025 10:20:25.074694  613485 node_ready.go:57] node "default-k8s-diff-port-767846" has "Ready":"False" status (will retry)
	I1025 10:20:23.631403  624632 out.go:252] * Restarting existing docker container for "old-k8s-version-714798" ...
	I1025 10:20:23.631491  624632 cli_runner.go:164] Run: docker start old-k8s-version-714798
	I1025 10:20:23.932468  624632 cli_runner.go:164] Run: docker container inspect old-k8s-version-714798 --format={{.State.Status}}
	I1025 10:20:23.956085  624632 kic.go:430] container "old-k8s-version-714798" state is running.
	I1025 10:20:23.956547  624632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-714798
	I1025 10:20:23.978748  624632 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/old-k8s-version-714798/config.json ...
	I1025 10:20:23.979037  624632 machine.go:93] provisionDockerMachine start ...
	I1025 10:20:23.979124  624632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:20:24.001727  624632 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:24.002092  624632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1025 10:20:24.002114  624632 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:20:24.003059  624632 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47234->127.0.0.1:33108: read: connection reset by peer
	I1025 10:20:27.149919  624632 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-714798
	
	I1025 10:20:27.149957  624632 ubuntu.go:182] provisioning hostname "old-k8s-version-714798"
	I1025 10:20:27.150022  624632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:20:27.169715  624632 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:27.170006  624632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1025 10:20:27.170026  624632 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-714798 && echo "old-k8s-version-714798" | sudo tee /etc/hostname
	I1025 10:20:27.339339  624632 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-714798
	
	I1025 10:20:27.339446  624632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:20:27.361033  624632 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:27.361258  624632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1025 10:20:27.361276  624632 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-714798' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-714798/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-714798' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:20:27.509983  624632 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:20:27.510028  624632 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-321838/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-321838/.minikube}
	I1025 10:20:27.510057  624632 ubuntu.go:190] setting up certificates
	I1025 10:20:27.510072  624632 provision.go:84] configureAuth start
	I1025 10:20:27.510153  624632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-714798
	I1025 10:20:27.529756  624632 provision.go:143] copyHostCerts
	I1025 10:20:27.529844  624632 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem, removing ...
	I1025 10:20:27.529877  624632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem
	I1025 10:20:27.529973  624632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem (1078 bytes)
	I1025 10:20:27.530097  624632 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem, removing ...
	I1025 10:20:27.530106  624632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem
	I1025 10:20:27.530135  624632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem (1123 bytes)
	I1025 10:20:27.530196  624632 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem, removing ...
	I1025 10:20:27.530203  624632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem
	I1025 10:20:27.530228  624632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem (1679 bytes)
	I1025 10:20:27.530280  624632 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-714798 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-714798]
	I1025 10:20:27.651694  624632 provision.go:177] copyRemoteCerts
	I1025 10:20:27.651767  624632 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:20:27.651805  624632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:20:27.671792  624632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/old-k8s-version-714798/id_rsa Username:docker}
	I1025 10:20:27.786744  624632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:20:27.810211  624632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1025 10:20:27.831489  624632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:20:27.856169  624632 provision.go:87] duration metric: took 346.080135ms to configureAuth
	I1025 10:20:27.856203  624632 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:20:27.856399  624632 config.go:182] Loaded profile config "old-k8s-version-714798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:20:27.856502  624632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:20:27.877756  624632 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:27.877983  624632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1025 10:20:27.878001  624632 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:20:28.211904  624632 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:20:28.211952  624632 machine.go:96] duration metric: took 4.232896794s to provisionDockerMachine
	I1025 10:20:28.211969  624632 start.go:293] postStartSetup for "old-k8s-version-714798" (driver="docker")
	I1025 10:20:28.211983  624632 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:20:28.212062  624632 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:20:28.212116  624632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:20:28.232261  624632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/old-k8s-version-714798/id_rsa Username:docker}
	I1025 10:20:28.682878  621097 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 10:20:28.682977  621097 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 10:20:28.683089  621097 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 10:20:28.683161  621097 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 10:20:28.683210  621097 kubeadm.go:318] OS: Linux
	I1025 10:20:28.683260  621097 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 10:20:28.683364  621097 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 10:20:28.683439  621097 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 10:20:28.683515  621097 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 10:20:28.683579  621097 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 10:20:28.683655  621097 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 10:20:28.683732  621097 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 10:20:28.683808  621097 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 10:20:28.683935  621097 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 10:20:28.684057  621097 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 10:20:28.684208  621097 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 10:20:28.684296  621097 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 10:20:28.688528  621097 out.go:252]   - Generating certificates and keys ...
	I1025 10:20:28.688611  621097 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 10:20:28.688666  621097 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 10:20:28.688720  621097 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 10:20:28.688766  621097 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 10:20:28.688835  621097 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 10:20:28.688881  621097 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 10:20:28.688925  621097 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 10:20:28.689044  621097 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-667966] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1025 10:20:28.689111  621097 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 10:20:28.689223  621097 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-667966] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1025 10:20:28.689297  621097 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 10:20:28.689401  621097 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 10:20:28.689469  621097 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 10:20:28.689557  621097 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 10:20:28.689639  621097 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 10:20:28.689728  621097 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 10:20:28.689811  621097 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 10:20:28.689901  621097 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 10:20:28.689989  621097 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 10:20:28.690121  621097 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 10:20:28.690215  621097 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 10:20:28.691995  621097 out.go:252]   - Booting up control plane ...
	I1025 10:20:28.692112  621097 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 10:20:28.692207  621097 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 10:20:28.692290  621097 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 10:20:28.692454  621097 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 10:20:28.692597  621097 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 10:20:28.692781  621097 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 10:20:28.692909  621097 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 10:20:28.692983  621097 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 10:20:28.693124  621097 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 10:20:28.693209  621097 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 10:20:28.693263  621097 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 505.345924ms
	I1025 10:20:28.693406  621097 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 10:20:28.693520  621097 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1025 10:20:28.693632  621097 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 10:20:28.693745  621097 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 10:20:28.693848  621097 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.855252313s
	I1025 10:20:28.693938  621097 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.449590605s
	I1025 10:20:28.694035  621097 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.501140692s
	I1025 10:20:28.694201  621097 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 10:20:28.694408  621097 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 10:20:28.694459  621097 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 10:20:28.694719  621097 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-667966 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 10:20:28.694815  621097 kubeadm.go:318] [bootstrap-token] Using token: a7ffqx.vn3kytu0edce2nju
	I1025 10:20:28.696404  621097 out.go:252]   - Configuring RBAC rules ...
	I1025 10:20:28.696521  621097 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 10:20:28.696638  621097 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 10:20:28.696841  621097 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 10:20:28.697023  621097 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 10:20:28.697209  621097 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 10:20:28.697373  621097 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 10:20:28.697489  621097 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 10:20:28.697532  621097 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 10:20:28.697570  621097 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 10:20:28.697576  621097 kubeadm.go:318] 
	I1025 10:20:28.697634  621097 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 10:20:28.697640  621097 kubeadm.go:318] 
	I1025 10:20:28.697702  621097 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 10:20:28.697707  621097 kubeadm.go:318] 
	I1025 10:20:28.697727  621097 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 10:20:28.697779  621097 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 10:20:28.697820  621097 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 10:20:28.697825  621097 kubeadm.go:318] 
	I1025 10:20:28.697872  621097 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 10:20:28.697878  621097 kubeadm.go:318] 
	I1025 10:20:28.697921  621097 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 10:20:28.697929  621097 kubeadm.go:318] 
	I1025 10:20:28.697972  621097 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 10:20:28.698046  621097 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 10:20:28.698116  621097 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 10:20:28.698123  621097 kubeadm.go:318] 
	I1025 10:20:28.698207  621097 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 10:20:28.698307  621097 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 10:20:28.698314  621097 kubeadm.go:318] 
	I1025 10:20:28.698460  621097 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token a7ffqx.vn3kytu0edce2nju \
	I1025 10:20:28.698596  621097 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d220a0fdd8ffa4188e5a3a4b5b37c16072a735a52e5507fcdbcb4d38c461642f \
	I1025 10:20:28.698645  621097 kubeadm.go:318] 	--control-plane 
	I1025 10:20:28.698660  621097 kubeadm.go:318] 
	I1025 10:20:28.698736  621097 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 10:20:28.698742  621097 kubeadm.go:318] 
	I1025 10:20:28.698805  621097 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token a7ffqx.vn3kytu0edce2nju \
	I1025 10:20:28.698920  621097 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d220a0fdd8ffa4188e5a3a4b5b37c16072a735a52e5507fcdbcb4d38c461642f 
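The --discovery-token-ca-cert-hash value printed in both join commands is a sha256 over the DER encoding of the cluster CA's public key. A joining machine can recompute it from the CA certificate to verify it is bootstrapping against the right control plane; this is the standard kubeadm recipe, assuming an RSA CA key as kubeadm generates by default (CA path per the certificateDir logged above):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'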
	I1025 10:20:28.698939  621097 cni.go:84] Creating CNI manager for ""
	I1025 10:20:28.698949  621097 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:20:28.700649  621097 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 10:20:28.340039  624632 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:20:28.344750  624632 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:20:28.344782  624632 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:20:28.344794  624632 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/addons for local assets ...
	I1025 10:20:28.344852  624632 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/files for local assets ...
	I1025 10:20:28.344924  624632 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem -> 3254552.pem in /etc/ssl/certs
	I1025 10:20:28.345014  624632 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:20:28.354165  624632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:20:28.376400  624632 start.go:296] duration metric: took 164.413023ms for postStartSetup
	I1025 10:20:28.376511  624632 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:20:28.376561  624632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:20:28.396278  624632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/old-k8s-version-714798/id_rsa Username:docker}
	I1025 10:20:28.499237  624632 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:20:28.505254  624632 fix.go:56] duration metric: took 4.897772727s for fixHost
	I1025 10:20:28.505282  624632 start.go:83] releasing machines lock for "old-k8s-version-714798", held for 4.897835155s
	I1025 10:20:28.505364  624632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-714798
	I1025 10:20:28.525461  624632 ssh_runner.go:195] Run: cat /version.json
	I1025 10:20:28.525531  624632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:20:28.525548  624632 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:20:28.525627  624632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:20:28.546453  624632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/old-k8s-version-714798/id_rsa Username:docker}
	I1025 10:20:28.546847  624632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/old-k8s-version-714798/id_rsa Username:docker}
	I1025 10:20:28.706337  624632 ssh_runner.go:195] Run: systemctl --version
	I1025 10:20:28.713808  624632 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:20:28.756282  624632 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:20:28.762374  624632 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:20:28.762449  624632 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:20:28.772619  624632 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:20:28.772659  624632 start.go:495] detecting cgroup driver to use...
	I1025 10:20:28.772697  624632 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 10:20:28.772751  624632 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:20:28.791535  624632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:20:28.808988  624632 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:20:28.809059  624632 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:20:28.828558  624632 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:20:28.846436  624632 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:20:28.957533  624632 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:20:29.097529  624632 docker.go:234] disabling docker service ...
	I1025 10:20:29.097602  624632 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:20:29.117544  624632 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:20:29.134925  624632 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:20:29.247635  624632 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:20:29.360206  624632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:20:29.376166  624632 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:20:29.396952  624632 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1025 10:20:29.397006  624632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:29.407930  624632 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 10:20:29.408000  624632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:29.419235  624632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:29.430133  624632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:29.440543  624632 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:20:29.451373  624632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:29.462502  624632 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:29.473297  624632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:29.484426  624632 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:20:29.494052  624632 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:20:29.502966  624632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:20:29.603708  624632 ssh_runner.go:195] Run: sudo systemctl restart crio
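Taken together, the sed/grep sequence since 10:20:29.39 leaves CRI-O's drop-in aligned with the kubelet config written later: pause image pinned for this Kubernetes version, systemd as the cgroup manager, conmon placed in the pod cgroup, and unprivileged binds allowed on low ports. A rough reconstruction of the resulting /etc/crio/crio.conf.d/02-crio.conf (assuming stock key placement; the file itself is not captured in this log):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]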
	I1025 10:20:29.747929  624632 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:20:29.748008  624632 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:20:29.753943  624632 start.go:563] Will wait 60s for crictl version
	I1025 10:20:29.754044  624632 ssh_runner.go:195] Run: which crictl
	I1025 10:20:29.759141  624632 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:20:29.795398  624632 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:20:29.795496  624632 ssh_runner.go:195] Run: crio --version
	I1025 10:20:29.832578  624632 ssh_runner.go:195] Run: crio --version
	I1025 10:20:29.869113  624632 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	W1025 10:20:27.573207  613485 node_ready.go:57] node "default-k8s-diff-port-767846" has "Ready":"False" status (will retry)
	W1025 10:20:29.573898  613485 node_ready.go:57] node "default-k8s-diff-port-767846" has "Ready":"False" status (will retry)
	I1025 10:20:29.870408  624632 cli_runner.go:164] Run: docker network inspect old-k8s-version-714798 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:20:29.890757  624632 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 10:20:29.896204  624632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:20:29.908798  624632 kubeadm.go:883] updating cluster {Name:old-k8s-version-714798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-714798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:20:29.908975  624632 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 10:20:29.909049  624632 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:20:29.948969  624632 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:20:29.948992  624632 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:20:29.949047  624632 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:20:29.983973  624632 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:20:29.984004  624632 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:20:29.984014  624632 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1025 10:20:29.984152  624632 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-714798 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-714798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
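The doubled ExecStart= in the unit above is the stock systemd override idiom: the empty ExecStart= first clears the packaged unit's command list so the second line can replace it (systemd otherwise rejects a second ExecStart for a simple service). The flag pair --cgroups-per-qos=false / --enforce-node-allocatable= must also travel together, since kubelet refuses to disable QoS cgroups while node-allocatable enforcement is still on. Applied by hand, the same override takes this shape (ExecStart line illustrative; minikube scp's its own copy to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below):

	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --config=/var/lib/kubelet/config.yaml
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart kubelet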
	I1025 10:20:29.984242  624632 ssh_runner.go:195] Run: crio config
	I1025 10:20:30.041641  624632 cni.go:84] Creating CNI manager for ""
	I1025 10:20:30.041665  624632 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:20:30.041683  624632 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:20:30.041714  624632 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-714798 NodeName:old-k8s-version-714798 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:20:30.041914  624632 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-714798"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:20:30.041998  624632 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1025 10:20:30.051672  624632 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:20:30.051736  624632 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:20:30.061266  624632 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1025 10:20:30.076274  624632 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:20:30.092043  624632 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
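That 2159-byte payload is the three-document config rendered above; kubeadm reads all three documents (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) from a single --config file. Were the bootstrap run by hand instead of through minikube's restart path, it would reduce to roughly this (illustrative invocation; the exact flags minikube passes are not shown in this excerpt):

	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new \
	  --ignore-preflight-errors=all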
	I1025 10:20:30.108778  624632 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:20:30.113960  624632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:20:30.127093  624632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:20:30.235093  624632 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:20:30.268407  624632 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/old-k8s-version-714798 for IP: 192.168.85.2
	I1025 10:20:30.268433  624632 certs.go:195] generating shared ca certs ...
	I1025 10:20:30.268455  624632 certs.go:227] acquiring lock for ca certs: {Name:mke559c04eea9c265cb2a7beab0da125bee52db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:30.268611  624632 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key
	I1025 10:20:30.268650  624632 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key
	I1025 10:20:30.268660  624632 certs.go:257] generating profile certs ...
	I1025 10:20:30.268738  624632 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/old-k8s-version-714798/client.key
	I1025 10:20:30.268816  624632 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/old-k8s-version-714798/apiserver.key.67fed23c
	I1025 10:20:30.268870  624632 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/old-k8s-version-714798/proxy-client.key
	I1025 10:20:30.269012  624632 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem (1338 bytes)
	W1025 10:20:30.269054  624632 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455_empty.pem, impossibly tiny 0 bytes
	I1025 10:20:30.269067  624632 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:20:30.269095  624632 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:20:30.269133  624632 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:20:30.269161  624632 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem (1679 bytes)
	I1025 10:20:30.269227  624632 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:20:30.269973  624632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:20:30.293130  624632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 10:20:30.315522  624632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:20:30.336923  624632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:20:30.365612  624632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/old-k8s-version-714798/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1025 10:20:30.389405  624632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/old-k8s-version-714798/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:20:30.411522  624632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/old-k8s-version-714798/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:20:30.434615  624632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/old-k8s-version-714798/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 10:20:30.457141  624632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:20:30.481198  624632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem --> /usr/share/ca-certificates/325455.pem (1338 bytes)
	I1025 10:20:30.503152  624632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /usr/share/ca-certificates/3254552.pem (1708 bytes)
	I1025 10:20:30.526676  624632 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:20:30.543578  624632 ssh_runner.go:195] Run: openssl version
	I1025 10:20:30.552354  624632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:20:30.564113  624632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:20:30.568723  624632 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:32 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:20:30.568786  624632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:20:30.611554  624632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:20:30.623038  624632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/325455.pem && ln -fs /usr/share/ca-certificates/325455.pem /etc/ssl/certs/325455.pem"
	I1025 10:20:30.634244  624632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/325455.pem
	I1025 10:20:30.639347  624632 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:38 /usr/share/ca-certificates/325455.pem
	I1025 10:20:30.639411  624632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/325455.pem
	I1025 10:20:30.687595  624632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/325455.pem /etc/ssl/certs/51391683.0"
	I1025 10:20:30.697003  624632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3254552.pem && ln -fs /usr/share/ca-certificates/3254552.pem /etc/ssl/certs/3254552.pem"
	I1025 10:20:30.707602  624632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3254552.pem
	I1025 10:20:30.712424  624632 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:38 /usr/share/ca-certificates/3254552.pem
	I1025 10:20:30.712492  624632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3254552.pem
	I1025 10:20:30.753115  624632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3254552.pem /etc/ssl/certs/3ec20f2e.0"
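Each hash/symlink pair above follows OpenSSL's hashed-directory convention: a CA directory like /etc/ssl/certs is searched by subject-name hash, so every PEM must be reachable through a link named <hash>.0 (b5213941, 51391683 and 3ec20f2e are exactly the hashes computed by the openssl x509 -hash calls two lines earlier in each block). The per-certificate step, generically (path is a placeholder):

	CERT=/usr/share/ca-certificates/example.pem         # placeholder
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"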
	I1025 10:20:30.763207  624632 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:20:30.768182  624632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:20:30.820360  624632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:20:30.873756  624632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:20:30.928124  624632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:20:30.996263  624632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:20:31.051200  624632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
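The six -checkend probes above ask one question per certificate: will it still be valid 86400 seconds (24 hours) from now? openssl exits 0 if so and 1 otherwise, which lets the caller branch on freshness without parsing dates. Standalone form of the same check (path illustrative):

	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	  echo "still valid for at least 24h"
	else
	  echo "expires within 24h; regenerate"
	fi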
	I1025 10:20:31.100464  624632 kubeadm.go:400] StartCluster: {Name:old-k8s-version-714798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-714798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:20:31.100579  624632 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:20:31.100643  624632 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:20:31.139868  624632 cri.go:89] found id: "5538d92e1ad00d0b895ea0869e732ceaf8db5758c6940c69bb5d41a8e0661704"
	I1025 10:20:31.139899  624632 cri.go:89] found id: "bbd6a05e151245b4f918254624d45abfaa66832cc221e776d8265d0e8fa29750"
	I1025 10:20:31.139906  624632 cri.go:89] found id: "ce12ceda5c77bef4710f4a8f8a5a88ca899e512d3d2151b06751ca05f3184af3"
	I1025 10:20:31.139911  624632 cri.go:89] found id: "b25eb7cda6de2aff244793687094ba7b3ca70cb7a03ef1adb707e0d582e0580e"
	I1025 10:20:31.139915  624632 cri.go:89] found id: ""
	I1025 10:20:31.139964  624632 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:20:31.160620  624632 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:20:31Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:20:31.160703  624632 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:20:31.173370  624632 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:20:31.173394  624632 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:20:31.173442  624632 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:20:31.187792  624632 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:20:31.188707  624632 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-714798" does not appear in /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:20:31.189219  624632 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-321838/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-714798" cluster setting kubeconfig missing "old-k8s-version-714798" context setting]
	I1025 10:20:31.190132  624632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:31.192398  624632 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:20:31.203861  624632 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1025 10:20:31.203910  624632 kubeadm.go:601] duration metric: took 30.508981ms to restartPrimaryControlPlane
	I1025 10:20:31.203928  624632 kubeadm.go:402] duration metric: took 103.474947ms to StartCluster
	I1025 10:20:31.203951  624632 settings.go:142] acquiring lock: {Name:mkccf1ae03faaf532633e370e89b56d0fc3d2b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:31.204039  624632 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:20:31.205680  624632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:31.206021  624632 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:20:31.206151  624632 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:20:31.206254  624632 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-714798"
	I1025 10:20:31.206274  624632 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-714798"
	I1025 10:20:31.206279  624632 addons.go:69] Setting dashboard=true in profile "old-k8s-version-714798"
	W1025 10:20:31.206286  624632 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:20:31.206283  624632 config.go:182] Loaded profile config "old-k8s-version-714798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:20:31.206293  624632 addons.go:238] Setting addon dashboard=true in "old-k8s-version-714798"
	I1025 10:20:31.206329  624632 host.go:66] Checking if "old-k8s-version-714798" exists ...
	I1025 10:20:31.206311  624632 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-714798"
	I1025 10:20:31.206369  624632 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-714798"
	W1025 10:20:31.206313  624632 addons.go:247] addon dashboard should already be in state true
	I1025 10:20:31.206479  624632 host.go:66] Checking if "old-k8s-version-714798" exists ...
	I1025 10:20:31.206698  624632 cli_runner.go:164] Run: docker container inspect old-k8s-version-714798 --format={{.State.Status}}
	I1025 10:20:31.206887  624632 cli_runner.go:164] Run: docker container inspect old-k8s-version-714798 --format={{.State.Status}}
	I1025 10:20:31.206971  624632 cli_runner.go:164] Run: docker container inspect old-k8s-version-714798 --format={{.State.Status}}
	I1025 10:20:31.208224  624632 out.go:179] * Verifying Kubernetes components...
	I1025 10:20:31.209983  624632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:20:31.235747  624632 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 10:20:31.237085  624632 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:20:31.238369  624632 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:20:31.238394  624632 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:20:31.238478  624632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:20:31.240080  624632 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-714798"
	W1025 10:20:31.240102  624632 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:20:31.240132  624632 host.go:66] Checking if "old-k8s-version-714798" exists ...
	I1025 10:20:31.240617  624632 cli_runner.go:164] Run: docker container inspect old-k8s-version-714798 --format={{.State.Status}}
	I1025 10:20:31.248653  624632 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:20:28.702207  621097 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 10:20:28.707264  621097 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 10:20:28.707286  621097 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 10:20:28.723039  621097 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 10:20:28.981009  621097 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 10:20:28.981145  621097 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:20:28.981192  621097 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-667966 minikube.k8s.io/updated_at=2025_10_25T10_20_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689 minikube.k8s.io/name=newest-cni-667966 minikube.k8s.io/primary=true
	I1025 10:20:29.012041  621097 ops.go:34] apiserver oom_adj: -16
	I1025 10:20:29.093784  621097 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:20:29.594443  621097 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:20:30.094122  621097 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:20:30.594532  621097 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:20:31.094240  621097 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:20:31.593913  621097 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:20:31.249915  624632 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:20:31.249942  624632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:20:31.250488  624632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:20:31.270485  624632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/old-k8s-version-714798/id_rsa Username:docker}
	I1025 10:20:31.276875  624632 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:20:31.276921  624632 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:20:31.276997  624632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:20:31.291160  624632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/old-k8s-version-714798/id_rsa Username:docker}
	I1025 10:20:31.317520  624632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/old-k8s-version-714798/id_rsa Username:docker}
	I1025 10:20:31.408835  624632 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:20:31.428404  624632 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:20:31.428434  624632 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:20:31.429366  624632 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-714798" to be "Ready" ...
	I1025 10:20:31.433173  624632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:20:31.448204  624632 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:20:31.448234  624632 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:20:31.459154  624632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:20:31.470428  624632 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:20:31.470449  624632 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:20:31.496403  624632 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:20:31.496432  624632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:20:31.521598  624632 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:20:31.521630  624632 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:20:31.545114  624632 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:20:31.545143  624632 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:20:31.567176  624632 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:20:31.567202  624632 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:20:31.584467  624632 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:20:31.584502  624632 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:20:31.603586  624632 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:20:31.603656  624632 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:20:31.621591  624632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:20:33.096080  624632 node_ready.go:49] node "old-k8s-version-714798" is "Ready"
	I1025 10:20:33.096124  624632 node_ready.go:38] duration metric: took 1.666725842s for node "old-k8s-version-714798" to be "Ready" ...
	I1025 10:20:33.096143  624632 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:20:33.096200  624632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:20:32.093972  621097 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:20:32.594505  621097 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:20:33.093929  621097 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:20:33.594532  621097 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:20:33.707111  621097 kubeadm.go:1113] duration metric: took 4.726035617s to wait for elevateKubeSystemPrivileges
	I1025 10:20:33.707147  621097 kubeadm.go:402] duration metric: took 15.535669128s to StartCluster
	I1025 10:20:33.707173  621097 settings.go:142] acquiring lock: {Name:mkccf1ae03faaf532633e370e89b56d0fc3d2b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:33.707241  621097 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:20:33.710412  621097 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:33.710782  621097 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:20:33.710865  621097 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 10:20:33.711083  621097 config.go:182] Loaded profile config "newest-cni-667966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:20:33.711097  621097 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:20:33.711537  621097 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-667966"
	I1025 10:20:33.711558  621097 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-667966"
	I1025 10:20:33.711590  621097 host.go:66] Checking if "newest-cni-667966" exists ...
	I1025 10:20:33.712139  621097 cli_runner.go:164] Run: docker container inspect newest-cni-667966 --format={{.State.Status}}
	I1025 10:20:33.713074  621097 addons.go:69] Setting default-storageclass=true in profile "newest-cni-667966"
	I1025 10:20:33.713131  621097 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-667966"
	I1025 10:20:33.713574  621097 cli_runner.go:164] Run: docker container inspect newest-cni-667966 --format={{.State.Status}}
	I1025 10:20:33.716011  621097 out.go:179] * Verifying Kubernetes components...
	I1025 10:20:33.719977  621097 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:20:33.753270  621097 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:20:33.755016  621097 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:20:33.755054  621097 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:20:33.755122  621097 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-667966
	I1025 10:20:33.755463  621097 addons.go:238] Setting addon default-storageclass=true in "newest-cni-667966"
	I1025 10:20:33.755517  621097 host.go:66] Checking if "newest-cni-667966" exists ...
	I1025 10:20:33.756098  621097 cli_runner.go:164] Run: docker container inspect newest-cni-667966 --format={{.State.Status}}
	I1025 10:20:33.783878  621097 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/newest-cni-667966/id_rsa Username:docker}
	I1025 10:20:33.799537  621097 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:20:33.799565  621097 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:20:33.799640  621097 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-667966
	I1025 10:20:33.834432  621097 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/newest-cni-667966/id_rsa Username:docker}
	I1025 10:20:33.885659  621097 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 10:20:33.980889  621097 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:20:34.014238  621097 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:20:34.074866  621097 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:20:34.235197  621097 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1025 10:20:34.238514  621097 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:20:34.238600  621097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:20:34.531477  621097 api_server.go:72] duration metric: took 818.547835ms to wait for apiserver process to appear ...
	I1025 10:20:34.531517  621097 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:20:34.531541  621097 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 10:20:34.540260  621097 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1025 10:20:34.542740  621097 api_server.go:141] control plane version: v1.34.1
	I1025 10:20:34.542776  621097 api_server.go:131] duration metric: took 11.253036ms to wait for apiserver health ...
	I1025 10:20:34.542788  621097 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:20:34.548718  621097 system_pods.go:59] 8 kube-system pods found
	I1025 10:20:34.548818  621097 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1025 10:20:34.183210  624632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.749996162s)
	I1025 10:20:34.183481  624632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.724291971s)
	I1025 10:20:34.705170  624632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.083520116s)
	I1025 10:20:34.705176  624632 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.60895573s)
	I1025 10:20:34.705369  624632 api_server.go:72] duration metric: took 3.499312083s to wait for apiserver process to appear ...
	I1025 10:20:34.705386  624632 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:20:34.705410  624632 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 10:20:34.707284  624632 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-714798 addons enable metrics-server
	
	I1025 10:20:34.709416  624632 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1025 10:20:34.548997  621097 system_pods.go:61] "coredns-66bc5c9577-r94h4" [2115a28b-31dc-4c2c-92cc-673a27e36bbf] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 10:20:34.549029  621097 system_pods.go:61] "etcd-newest-cni-667966" [11d44ba6-f334-4879-aa97-64a7a7607270] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:20:34.549051  621097 system_pods.go:61] "kindnet-srprb" [02e64d9a-cbe3-4e98-81a0-7c609fa2b1bb] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 10:20:34.549067  621097 system_pods.go:61] "kube-apiserver-newest-cni-667966" [5cec7e59-41bf-413f-a61f-f10bb6663011] Running
	I1025 10:20:34.549098  621097 system_pods.go:61] "kube-controller-manager-newest-cni-667966" [ff16c3cb-b8d1-4823-a897-47d3d0e58335] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:20:34.549122  621097 system_pods.go:61] "kube-proxy-vngwv" [273b5cf5-0600-4009-bab3-06b3a900b02d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 10:20:34.549141  621097 system_pods.go:61] "kube-scheduler-newest-cni-667966" [9aac2144-6942-4b66-9a48-0defb4aba756] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:20:34.549157  621097 system_pods.go:61] "storage-provisioner" [bd681a48-b157-41ff-b49f-5189827996b1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 10:20:34.549175  621097 system_pods.go:74] duration metric: took 6.379073ms to wait for pod list to return data ...
	I1025 10:20:34.549227  621097 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:20:34.550986  621097 addons.go:514] duration metric: took 839.880225ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 10:20:34.552723  621097 default_sa.go:45] found service account: "default"
	I1025 10:20:34.552827  621097 default_sa.go:55] duration metric: took 3.56784ms for default service account to be created ...
	I1025 10:20:34.552855  621097 kubeadm.go:586] duration metric: took 839.937452ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 10:20:34.552902  621097 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:20:34.557421  621097 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 10:20:34.557454  621097 node_conditions.go:123] node cpu capacity is 8
	I1025 10:20:34.557472  621097 node_conditions.go:105] duration metric: took 4.562438ms to run NodePressure ...
	I1025 10:20:34.557487  621097 start.go:241] waiting for startup goroutines ...
	I1025 10:20:34.741157  621097 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-667966" context rescaled to 1 replicas
	I1025 10:20:34.741204  621097 start.go:246] waiting for cluster config update ...
	I1025 10:20:34.741219  621097 start.go:255] writing updated cluster config ...
	I1025 10:20:34.741616  621097 ssh_runner.go:195] Run: rm -f paused
	I1025 10:20:34.802897  621097 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 10:20:34.805574  621097 out.go:179] * Done! kubectl is now configured to use "newest-cni-667966" cluster and "default" namespace by default
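	
	The healthz wait logged in this run (api_server.go:253/279) is a plain HTTPS poll: GET /healthz until the server answers 200 with body "ok". A minimal Go sketch of that loop, assuming the address from the log and skipping TLS verification for brevity (minikube itself verifies against the cluster CA):
	
	// healthz_poll.go — illustrative sketch of the apiserver healthz wait above.
	// The URL, timeout, and InsecureSkipVerify are assumptions for this example.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Assumption: skip verification for brevity; real code should
			// trust the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.94.2:8443/healthz" // address taken from the log above
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == 200 && string(body) == "ok" {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond) // retry interval is an assumption
		}
		fmt.Println("timed out waiting for healthz")
	}
	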
	W1025 10:20:31.574122  613485 node_ready.go:57] node "default-k8s-diff-port-767846" has "Ready":"False" status (will retry)
	I1025 10:20:33.573860  613485 node_ready.go:49] node "default-k8s-diff-port-767846" is "Ready"
	I1025 10:20:33.573896  613485 node_ready.go:38] duration metric: took 10.503891631s for node "default-k8s-diff-port-767846" to be "Ready" ...
	I1025 10:20:33.573916  613485 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:20:33.573975  613485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:20:33.589980  613485 api_server.go:72] duration metric: took 10.900887082s to wait for apiserver process to appear ...
	I1025 10:20:33.590010  613485 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:20:33.590033  613485 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1025 10:20:33.605095  613485 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1025 10:20:33.608796  613485 api_server.go:141] control plane version: v1.34.1
	I1025 10:20:33.608912  613485 api_server.go:131] duration metric: took 18.892484ms to wait for apiserver health ...
	I1025 10:20:33.608934  613485 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:20:33.613728  613485 system_pods.go:59] 8 kube-system pods found
	I1025 10:20:33.613790  613485 system_pods.go:61] "coredns-66bc5c9577-rznxv" [d7eae20c-8d39-4486-ab11-13675911180f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:20:33.613798  613485 system_pods.go:61] "etcd-default-k8s-diff-port-767846" [7612c238-ca0d-458d-a901-e96167590fc2] Running
	I1025 10:20:33.613807  613485 system_pods.go:61] "kindnet-vcqs2" [e41fd0fd-97c4-44ef-a645-cf0136340098] Running
	I1025 10:20:33.613816  613485 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-767846" [6eaa12a1-d4b6-4e96-81ed-18662e83034c] Running
	I1025 10:20:33.613841  613485 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-767846" [7ffb55c8-db2f-4807-ace7-044a0c281f62] Running
	I1025 10:20:33.613851  613485 system_pods.go:61] "kube-proxy-cvm5c" [42278e98-5278-4efa-b484-ec73c16fc851] Running
	I1025 10:20:33.613856  613485 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-767846" [934d0b38-85ad-4c54-9e94-970177aa8cf5] Running
	I1025 10:20:33.613862  613485 system_pods.go:61] "storage-provisioner" [06a917da-eaa2-4b50-8c56-31a0ca7d14e2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:20:33.613872  613485 system_pods.go:74] duration metric: took 4.929488ms to wait for pod list to return data ...
	I1025 10:20:33.613883  613485 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:20:33.617396  613485 default_sa.go:45] found service account: "default"
	I1025 10:20:33.617428  613485 default_sa.go:55] duration metric: took 3.536901ms for default service account to be created ...
	I1025 10:20:33.617440  613485 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:20:33.621108  613485 system_pods.go:86] 8 kube-system pods found
	I1025 10:20:33.621140  613485 system_pods.go:89] "coredns-66bc5c9577-rznxv" [d7eae20c-8d39-4486-ab11-13675911180f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:20:33.621149  613485 system_pods.go:89] "etcd-default-k8s-diff-port-767846" [7612c238-ca0d-458d-a901-e96167590fc2] Running
	I1025 10:20:33.621156  613485 system_pods.go:89] "kindnet-vcqs2" [e41fd0fd-97c4-44ef-a645-cf0136340098] Running
	I1025 10:20:33.621159  613485 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-767846" [6eaa12a1-d4b6-4e96-81ed-18662e83034c] Running
	I1025 10:20:33.621162  613485 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-767846" [7ffb55c8-db2f-4807-ace7-044a0c281f62] Running
	I1025 10:20:33.621168  613485 system_pods.go:89] "kube-proxy-cvm5c" [42278e98-5278-4efa-b484-ec73c16fc851] Running
	I1025 10:20:33.621171  613485 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-767846" [934d0b38-85ad-4c54-9e94-970177aa8cf5] Running
	I1025 10:20:33.621176  613485 system_pods.go:89] "storage-provisioner" [06a917da-eaa2-4b50-8c56-31a0ca7d14e2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:20:33.621210  613485 retry.go:31] will retry after 283.719782ms: missing components: kube-dns
	I1025 10:20:33.929063  613485 system_pods.go:86] 8 kube-system pods found
	I1025 10:20:33.929109  613485 system_pods.go:89] "coredns-66bc5c9577-rznxv" [d7eae20c-8d39-4486-ab11-13675911180f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:20:33.929120  613485 system_pods.go:89] "etcd-default-k8s-diff-port-767846" [7612c238-ca0d-458d-a901-e96167590fc2] Running
	I1025 10:20:33.929129  613485 system_pods.go:89] "kindnet-vcqs2" [e41fd0fd-97c4-44ef-a645-cf0136340098] Running
	I1025 10:20:33.929135  613485 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-767846" [6eaa12a1-d4b6-4e96-81ed-18662e83034c] Running
	I1025 10:20:33.929140  613485 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-767846" [7ffb55c8-db2f-4807-ace7-044a0c281f62] Running
	I1025 10:20:33.929145  613485 system_pods.go:89] "kube-proxy-cvm5c" [42278e98-5278-4efa-b484-ec73c16fc851] Running
	I1025 10:20:33.929151  613485 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-767846" [934d0b38-85ad-4c54-9e94-970177aa8cf5] Running
	I1025 10:20:33.929580  613485 system_pods.go:89] "storage-provisioner" [06a917da-eaa2-4b50-8c56-31a0ca7d14e2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:20:33.929614  613485 retry.go:31] will retry after 268.429996ms: missing components: kube-dns
	I1025 10:20:34.208165  613485 system_pods.go:86] 8 kube-system pods found
	I1025 10:20:34.208297  613485 system_pods.go:89] "coredns-66bc5c9577-rznxv" [d7eae20c-8d39-4486-ab11-13675911180f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:20:34.208310  613485 system_pods.go:89] "etcd-default-k8s-diff-port-767846" [7612c238-ca0d-458d-a901-e96167590fc2] Running
	I1025 10:20:34.208391  613485 system_pods.go:89] "kindnet-vcqs2" [e41fd0fd-97c4-44ef-a645-cf0136340098] Running
	I1025 10:20:34.208398  613485 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-767846" [6eaa12a1-d4b6-4e96-81ed-18662e83034c] Running
	I1025 10:20:34.208404  613485 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-767846" [7ffb55c8-db2f-4807-ace7-044a0c281f62] Running
	I1025 10:20:34.208410  613485 system_pods.go:89] "kube-proxy-cvm5c" [42278e98-5278-4efa-b484-ec73c16fc851] Running
	I1025 10:20:34.208454  613485 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-767846" [934d0b38-85ad-4c54-9e94-970177aa8cf5] Running
	I1025 10:20:34.208473  613485 system_pods.go:89] "storage-provisioner" [06a917da-eaa2-4b50-8c56-31a0ca7d14e2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:20:34.208497  613485 retry.go:31] will retry after 468.305806ms: missing components: kube-dns
	I1025 10:20:34.681619  613485 system_pods.go:86] 8 kube-system pods found
	I1025 10:20:34.681655  613485 system_pods.go:89] "coredns-66bc5c9577-rznxv" [d7eae20c-8d39-4486-ab11-13675911180f] Running
	I1025 10:20:34.681661  613485 system_pods.go:89] "etcd-default-k8s-diff-port-767846" [7612c238-ca0d-458d-a901-e96167590fc2] Running
	I1025 10:20:34.681666  613485 system_pods.go:89] "kindnet-vcqs2" [e41fd0fd-97c4-44ef-a645-cf0136340098] Running
	I1025 10:20:34.681669  613485 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-767846" [6eaa12a1-d4b6-4e96-81ed-18662e83034c] Running
	I1025 10:20:34.681673  613485 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-767846" [7ffb55c8-db2f-4807-ace7-044a0c281f62] Running
	I1025 10:20:34.681676  613485 system_pods.go:89] "kube-proxy-cvm5c" [42278e98-5278-4efa-b484-ec73c16fc851] Running
	I1025 10:20:34.681679  613485 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-767846" [934d0b38-85ad-4c54-9e94-970177aa8cf5] Running
	I1025 10:20:34.681682  613485 system_pods.go:89] "storage-provisioner" [06a917da-eaa2-4b50-8c56-31a0ca7d14e2] Running
	I1025 10:20:34.681691  613485 system_pods.go:126] duration metric: took 1.064244463s to wait for k8s-apps to be running ...
	I1025 10:20:34.681698  613485 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:20:34.681743  613485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:20:34.701650  613485 system_svc.go:56] duration metric: took 19.940255ms WaitForService to wait for kubelet
	I1025 10:20:34.701685  613485 kubeadm.go:586] duration metric: took 12.012600808s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:20:34.701710  613485 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:20:34.705477  613485 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 10:20:34.705506  613485 node_conditions.go:123] node cpu capacity is 8
	I1025 10:20:34.705526  613485 node_conditions.go:105] duration metric: took 3.810818ms to run NodePressure ...
	I1025 10:20:34.705543  613485 start.go:241] waiting for startup goroutines ...
	I1025 10:20:34.705556  613485 start.go:246] waiting for cluster config update ...
	I1025 10:20:34.705573  613485 start.go:255] writing updated cluster config ...
	I1025 10:20:34.705899  613485 ssh_runner.go:195] Run: rm -f paused
	I1025 10:20:34.711209  613485 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:20:34.715989  613485 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rznxv" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:20:34.722119  613485 pod_ready.go:94] pod "coredns-66bc5c9577-rznxv" is "Ready"
	I1025 10:20:34.722148  613485 pod_ready.go:86] duration metric: took 6.126207ms for pod "coredns-66bc5c9577-rznxv" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:20:34.724846  613485 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:20:34.730186  613485 pod_ready.go:94] pod "etcd-default-k8s-diff-port-767846" is "Ready"
	I1025 10:20:34.730222  613485 pod_ready.go:86] duration metric: took 5.342307ms for pod "etcd-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:20:34.732691  613485 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:20:34.738549  613485 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-767846" is "Ready"
	I1025 10:20:34.738583  613485 pod_ready.go:86] duration metric: took 5.865847ms for pod "kube-apiserver-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:20:34.741640  613485 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:20:35.116365  613485 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-767846" is "Ready"
	I1025 10:20:35.116401  613485 pod_ready.go:86] duration metric: took 374.737943ms for pod "kube-controller-manager-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
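	
	The pod_ready waits above boil down to reading each Pod's Ready condition and retrying until it is True or the pod is gone. A rough client-go equivalent, with the kubeconfig path and pod name taken as assumptions from this run:
	
	// podready_sketch.go — sketch of the pod_ready.go wait seen in the log.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// podIsReady reports whether the Pod's Ready condition is True.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path is an assumption
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll until the pod reports Ready, mirroring the 4m0s "extra waiting" above.
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-rznxv", metav1.GetOptions{})
			if err == nil && podIsReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second) // retry interval is an assumption
		}
		fmt.Println("timed out")
	}
	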
	
	
	==> CRI-O <==
	Oct 25 10:20:34 newest-cni-667966 crio[771]: time="2025-10-25T10:20:34.064150653Z" level=info msg="Running pod sandbox: kube-system/kindnet-srprb/POD" id=336b90a0-7de3-42ae-b002-2081b728e3ae name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:20:34 newest-cni-667966 crio[771]: time="2025-10-25T10:20:34.06640913Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:34 newest-cni-667966 crio[771]: time="2025-10-25T10:20:34.066129358Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 10:20:34 newest-cni-667966 crio[771]: time="2025-10-25T10:20:34.071402217Z" level=info msg="Ran pod sandbox 74d39d94e520eb24b91df5774f090a5e6c129e86ac1b40c54ab143b25d923f12 with infra container: kube-system/kube-proxy-vngwv/POD" id=aa30d04d-5f77-4aa9-bcda-6bba2f28fc49 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:20:34 newest-cni-667966 crio[771]: time="2025-10-25T10:20:34.072557214Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=336b90a0-7de3-42ae-b002-2081b728e3ae name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:20:34 newest-cni-667966 crio[771]: time="2025-10-25T10:20:34.074248703Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=0bb2410d-0710-435f-b320-408252c49cf2 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:20:34 newest-cni-667966 crio[771]: time="2025-10-25T10:20:34.07619016Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 10:20:34 newest-cni-667966 crio[771]: time="2025-10-25T10:20:34.077045124Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=6ebb9e3c-04a3-4724-bc40-cd0508556850 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:20:34 newest-cni-667966 crio[771]: time="2025-10-25T10:20:34.077189515Z" level=info msg="Ran pod sandbox caa102861fb76234eae3146f3588679532b9300388b848733ecf8e7497287202 with infra container: kube-system/kindnet-srprb/POD" id=336b90a0-7de3-42ae-b002-2081b728e3ae name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:20:34 newest-cni-667966 crio[771]: time="2025-10-25T10:20:34.081531732Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=61048e2f-8ce3-4d6c-8d81-f9eaef3c3863 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:20:34 newest-cni-667966 crio[771]: time="2025-10-25T10:20:34.084431437Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=1e5068d5-64cc-420b-b9c4-ebfd0c547e08 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:20:34 newest-cni-667966 crio[771]: time="2025-10-25T10:20:34.084665432Z" level=info msg="Creating container: kube-system/kube-proxy-vngwv/kube-proxy" id=473a0fd3-dc78-4e1a-ae4f-84bd9a95d876 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:20:34 newest-cni-667966 crio[771]: time="2025-10-25T10:20:34.085555807Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:34 newest-cni-667966 crio[771]: time="2025-10-25T10:20:34.093396567Z" level=info msg="Creating container: kube-system/kindnet-srprb/kindnet-cni" id=8192f801-14b6-4ff1-a12b-c3b263be5821 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:20:34 newest-cni-667966 crio[771]: time="2025-10-25T10:20:34.093575602Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:34 newest-cni-667966 crio[771]: time="2025-10-25T10:20:34.099952433Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:34 newest-cni-667966 crio[771]: time="2025-10-25T10:20:34.101341431Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:34 newest-cni-667966 crio[771]: time="2025-10-25T10:20:34.10397679Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:34 newest-cni-667966 crio[771]: time="2025-10-25T10:20:34.104618262Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:34 newest-cni-667966 crio[771]: time="2025-10-25T10:20:34.158171717Z" level=info msg="Created container 452eac241eb6201570a6509cfe310612e1c6ae63fd78b7041323195df12f7506: kube-system/kindnet-srprb/kindnet-cni" id=8192f801-14b6-4ff1-a12b-c3b263be5821 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:20:34 newest-cni-667966 crio[771]: time="2025-10-25T10:20:34.15973203Z" level=info msg="Starting container: 452eac241eb6201570a6509cfe310612e1c6ae63fd78b7041323195df12f7506" id=021c0e41-a102-4b7e-8238-f07ca41d7848 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:20:34 newest-cni-667966 crio[771]: time="2025-10-25T10:20:34.167774123Z" level=info msg="Started container" PID=1601 containerID=452eac241eb6201570a6509cfe310612e1c6ae63fd78b7041323195df12f7506 description=kube-system/kindnet-srprb/kindnet-cni id=021c0e41-a102-4b7e-8238-f07ca41d7848 name=/runtime.v1.RuntimeService/StartContainer sandboxID=caa102861fb76234eae3146f3588679532b9300388b848733ecf8e7497287202
	Oct 25 10:20:34 newest-cni-667966 crio[771]: time="2025-10-25T10:20:34.178519809Z" level=info msg="Created container dc6e8e30b67cfc5b851a33d8d026a2d1ce038acbe4e4a170617d8be8f76dea54: kube-system/kube-proxy-vngwv/kube-proxy" id=473a0fd3-dc78-4e1a-ae4f-84bd9a95d876 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:20:34 newest-cni-667966 crio[771]: time="2025-10-25T10:20:34.183465675Z" level=info msg="Starting container: dc6e8e30b67cfc5b851a33d8d026a2d1ce038acbe4e4a170617d8be8f76dea54" id=69de5e7c-117a-476e-8ab1-b3f12a5dcfe7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:20:34 newest-cni-667966 crio[771]: time="2025-10-25T10:20:34.189632651Z" level=info msg="Started container" PID=1602 containerID=dc6e8e30b67cfc5b851a33d8d026a2d1ce038acbe4e4a170617d8be8f76dea54 description=kube-system/kube-proxy-vngwv/kube-proxy id=69de5e7c-117a-476e-8ab1-b3f12a5dcfe7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=74d39d94e520eb24b91df5774f090a5e6c129e86ac1b40c54ab143b25d923f12
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	452eac241eb62       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   2 seconds ago       Running             kindnet-cni               0                   caa102861fb76       kindnet-srprb                               kube-system
	dc6e8e30b67cf       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   2 seconds ago       Running             kube-proxy                0                   74d39d94e520e       kube-proxy-vngwv                            kube-system
	36facf56ee9b2       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   12 seconds ago      Running             etcd                      0                   8561152d07fd3       etcd-newest-cni-667966                      kube-system
	2fa25889a284e       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   12 seconds ago      Running             kube-apiserver            0                   e56a4047570e0       kube-apiserver-newest-cni-667966            kube-system
	52cdfd910599f       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   12 seconds ago      Running             kube-controller-manager   0                   4a976df2304ec       kube-controller-manager-newest-cni-667966   kube-system
	e5a7770291a43       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   12 seconds ago      Running             kube-scheduler            0                   6f0995f66a96b       kube-scheduler-newest-cni-667966            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-667966
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-667966
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=newest-cni-667966
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_20_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:20:25 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-667966
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:20:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:20:28 +0000   Sat, 25 Oct 2025 10:20:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:20:28 +0000   Sat, 25 Oct 2025 10:20:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:20:28 +0000   Sat, 25 Oct 2025 10:20:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 25 Oct 2025 10:20:28 +0000   Sat, 25 Oct 2025 10:20:23 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-667966
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                276bfa54-9db8-48b4-86d5-3278d4455526
	  Boot ID:                    41de8bfa-0cc1-441f-80d4-c56fb8371229
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-667966                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-srprb                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-667966             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-667966    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-vngwv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-667966             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 1s    kube-proxy       
	  Normal  Starting                 9s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s    kubelet          Node newest-cni-667966 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s    kubelet          Node newest-cni-667966 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s    kubelet          Node newest-cni-667966 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s    node-controller  Node newest-cni-667966 event: Registered Node newest-cni-667966 in Controller
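	
	The describe output shows why scheduling stalls briefly at startup: Ready is False with NetworkPluginNotReady, and the node.kubernetes.io/not-ready taint stays on the node until kindnet writes a CNI config into /etc/cni/net.d/. A sketch of the two checks a readiness waiter performs (types from k8s.io/api; fetching the Node object is elided, as in the pod sketch above):
	
	// nodeready_sketch.go — interpreting the node conditions and taint above.
	package main
	
	import (
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
	)
	
	// nodeReady mirrors the node_ready.go checks in the log: the node counts
	// as Ready only once the NodeReady condition turns True.
	func nodeReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}
	
	// hasNotReadyTaint reports whether the scheduler-facing taint from the
	// describe output is still present; it normally clears alongside Ready=True.
	func hasNotReadyTaint(node *corev1.Node) bool {
		for _, t := range node.Spec.Taints {
			if t.Key == "node.kubernetes.io/not-ready" {
				return true
			}
		}
		return false
	}
	
	func main() {
		n := &corev1.Node{} // in practice, fetched via client-go
		fmt.Println(nodeReady(n), hasNotReadyTaint(n))
	}
	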
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[Oct25 10:18] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 3d 4d bf 49 5d 08 06
	[  +0.000365] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fe 72 b8 ab d2 81 08 06
	[ +29.291338] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 96 23 11 37 e3 00 08 06
	[  +0.000335] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[ +21.527050] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 89 98 95 1f c3 08 06
	[  +0.000689] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[Oct25 10:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[  +9.472150] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
	[  +6.585715] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ce 90 e9 36 a0 95 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[ +15.111475] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 5e 04 d2 54 0d 08 06
	[  +0.000467] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
	
	
	==> etcd [36facf56ee9b21ee0988e18011ed27638926d2221ecb730c1d878b2e118ffbe8] <==
	{"level":"warn","ts":"2025-10-25T10:20:24.502246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:24.509695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:24.518391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:24.526252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:24.533082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:24.539555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:24.547132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:24.554751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:24.562172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:24.569222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:24.578461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:24.586236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:24.594863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:24.602980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:24.610447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:24.617508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:24.626629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:24.632960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:24.647735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:24.655220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:24.663243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:24.676187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:24.683433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:24.690457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:24.745177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37404","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:20:36 up  2:03,  0 user,  load average: 6.08, 4.97, 5.98
	Linux newest-cni-667966 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [452eac241eb6201570a6509cfe310612e1c6ae63fd78b7041323195df12f7506] <==
	I1025 10:20:34.450804       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:20:34.452989       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1025 10:20:34.454198       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:20:34.454793       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:20:34.455427       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:20:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	E1025 10:20:34.683851       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1025 10:20:34.683922       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:20:34.683932       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:20:34.683942       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:20:34.749117       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:20:34.749386       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
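	
	The two "Failed to watch ... connection refused" errors above are a startup race rather than a persistent failure: kindnet starts before kube-proxy has programmed the kubernetes service VIP (10.96.0.1:443), so the first dials are refused and client-go's reflector retries with backoff until they succeed. The equivalent wait, hand-rolled as a sketch (attempt count and timings are assumptions):
	
	// vipwait_sketch.go — waiting for the service VIP to become reachable.
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		for i := 0; i < 30; i++ {
			conn, err := net.DialTimeout("tcp", "10.96.0.1:443", time.Second)
			if err == nil {
				conn.Close()
				fmt.Println("service VIP reachable")
				return
			}
			fmt.Println("retrying:", err)
			time.Sleep(time.Second)
		}
		fmt.Println("gave up waiting for service VIP")
	}
	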
	
	
	==> kube-apiserver [2fa25889a284e0a22c3af113c7e5dd6795b883bee0595ff9c7fc43b97069558a] <==
	I1025 10:20:25.253178       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:20:25.253190       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:20:25.253422       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 10:20:25.254509       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:20:25.257034       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 10:20:25.266547       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:20:25.270224       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:20:25.448005       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:20:26.156938       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 10:20:26.161049       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 10:20:26.161067       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:20:26.759663       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:20:26.810138       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:20:26.963087       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 10:20:26.972940       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1025 10:20:26.974179       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:20:26.979474       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:20:27.196034       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:20:28.084196       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:20:28.097447       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 10:20:28.106454       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 10:20:32.350491       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:20:32.355660       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:20:33.102960       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1025 10:20:33.161694       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [52cdfd910599fb6ff752da410d1e20fbb69845fb652d800f077062d7449b5816] <==
	I1025 10:20:32.196829       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 10:20:32.196849       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:20:32.196872       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 10:20:32.196887       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:20:32.196897       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 10:20:32.196907       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 10:20:32.202139       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 10:20:32.202193       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 10:20:32.202289       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 10:20:32.202502       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 10:20:32.202587       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 10:20:32.202619       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 10:20:32.202686       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 10:20:32.202730       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 10:20:32.202769       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 10:20:32.202778       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 10:20:32.202785       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 10:20:32.205706       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:20:32.210120       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:20:32.210262       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:20:32.210291       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:20:32.215123       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-667966" podCIDRs=["10.42.0.0/24"]
	I1025 10:20:32.217968       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 10:20:32.229492       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 10:20:32.234679       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [dc6e8e30b67cfc5b851a33d8d026a2d1ce038acbe4e4a170617d8be8f76dea54] <==
	I1025 10:20:34.263095       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:20:34.361073       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:20:34.461959       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:20:34.462005       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1025 10:20:34.462109       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:20:34.513723       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:20:34.513879       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:20:34.524848       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:20:34.526007       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:20:34.526087       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:20:34.535010       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:20:34.535040       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:20:34.535236       1 config.go:200] "Starting service config controller"
	I1025 10:20:34.535247       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:20:34.535027       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:20:34.536068       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:20:34.536125       1 config.go:309] "Starting node config controller"
	I1025 10:20:34.536305       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:20:34.536419       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:20:34.635400       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:20:34.635426       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 10:20:34.636792       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [e5a7770291a4301fda3ee24d543e26f4150607b7bcb6f2edc52225fc6ac2f4c9] <==
	E1025 10:20:25.215035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 10:20:25.215053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:20:25.215302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:20:25.215684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 10:20:25.215710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 10:20:25.215743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 10:20:25.215743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:20:25.215827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 10:20:25.215924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 10:20:26.068827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 10:20:26.114582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:20:26.164789       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 10:20:26.313468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 10:20:26.339681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:20:26.340740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:20:26.404989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 10:20:26.417650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 10:20:26.436165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:20:26.468620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 10:20:26.499858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 10:20:26.517341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:20:26.530559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:20:26.562867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 10:20:26.649314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1025 10:20:28.510718       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:20:28 newest-cni-667966 kubelet[1319]: E1025 10:20:28.979162    1319 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-667966\" already exists" pod="kube-system/kube-apiserver-newest-cni-667966"
	Oct 25 10:20:28 newest-cni-667966 kubelet[1319]: E1025 10:20:28.979196    1319 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-667966\" already exists" pod="kube-system/etcd-newest-cni-667966"
	Oct 25 10:20:28 newest-cni-667966 kubelet[1319]: E1025 10:20:28.979288    1319 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-667966\" already exists" pod="kube-system/kube-controller-manager-newest-cni-667966"
	Oct 25 10:20:29 newest-cni-667966 kubelet[1319]: I1025 10:20:29.049835    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-667966" podStartSLOduration=1.049810561 podStartE2EDuration="1.049810561s" podCreationTimestamp="2025-10-25 10:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:20:29.039022205 +0000 UTC m=+1.197759311" watchObservedRunningTime="2025-10-25 10:20:29.049810561 +0000 UTC m=+1.208547651"
	Oct 25 10:20:29 newest-cni-667966 kubelet[1319]: I1025 10:20:29.060639    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-667966" podStartSLOduration=1.06061625 podStartE2EDuration="1.06061625s" podCreationTimestamp="2025-10-25 10:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:20:29.049794277 +0000 UTC m=+1.208531368" watchObservedRunningTime="2025-10-25 10:20:29.06061625 +0000 UTC m=+1.219353344"
	Oct 25 10:20:29 newest-cni-667966 kubelet[1319]: I1025 10:20:29.072180    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-667966" podStartSLOduration=1.072152718 podStartE2EDuration="1.072152718s" podCreationTimestamp="2025-10-25 10:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:20:29.061256794 +0000 UTC m=+1.219993904" watchObservedRunningTime="2025-10-25 10:20:29.072152718 +0000 UTC m=+1.230889811"
	Oct 25 10:20:29 newest-cni-667966 kubelet[1319]: I1025 10:20:29.086450    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-667966" podStartSLOduration=2.086429976 podStartE2EDuration="2.086429976s" podCreationTimestamp="2025-10-25 10:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:20:29.072920633 +0000 UTC m=+1.231657744" watchObservedRunningTime="2025-10-25 10:20:29.086429976 +0000 UTC m=+1.245167068"
	Oct 25 10:20:32 newest-cni-667966 kubelet[1319]: I1025 10:20:32.293639    1319 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 25 10:20:32 newest-cni-667966 kubelet[1319]: I1025 10:20:32.294551    1319 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 25 10:20:33 newest-cni-667966 kubelet[1319]: I1025 10:20:33.165205    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/273b5cf5-0600-4009-bab3-06b3a900b02d-lib-modules\") pod \"kube-proxy-vngwv\" (UID: \"273b5cf5-0600-4009-bab3-06b3a900b02d\") " pod="kube-system/kube-proxy-vngwv"
	Oct 25 10:20:33 newest-cni-667966 kubelet[1319]: I1025 10:20:33.165262    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhzrk\" (UniqueName: \"kubernetes.io/projected/273b5cf5-0600-4009-bab3-06b3a900b02d-kube-api-access-dhzrk\") pod \"kube-proxy-vngwv\" (UID: \"273b5cf5-0600-4009-bab3-06b3a900b02d\") " pod="kube-system/kube-proxy-vngwv"
	Oct 25 10:20:33 newest-cni-667966 kubelet[1319]: I1025 10:20:33.165290    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02e64d9a-cbe3-4e98-81a0-7c609fa2b1bb-lib-modules\") pod \"kindnet-srprb\" (UID: \"02e64d9a-cbe3-4e98-81a0-7c609fa2b1bb\") " pod="kube-system/kindnet-srprb"
	Oct 25 10:20:33 newest-cni-667966 kubelet[1319]: I1025 10:20:33.165357    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/273b5cf5-0600-4009-bab3-06b3a900b02d-kube-proxy\") pod \"kube-proxy-vngwv\" (UID: \"273b5cf5-0600-4009-bab3-06b3a900b02d\") " pod="kube-system/kube-proxy-vngwv"
	Oct 25 10:20:33 newest-cni-667966 kubelet[1319]: I1025 10:20:33.165380    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/273b5cf5-0600-4009-bab3-06b3a900b02d-xtables-lock\") pod \"kube-proxy-vngwv\" (UID: \"273b5cf5-0600-4009-bab3-06b3a900b02d\") " pod="kube-system/kube-proxy-vngwv"
	Oct 25 10:20:33 newest-cni-667966 kubelet[1319]: I1025 10:20:33.165399    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/02e64d9a-cbe3-4e98-81a0-7c609fa2b1bb-cni-cfg\") pod \"kindnet-srprb\" (UID: \"02e64d9a-cbe3-4e98-81a0-7c609fa2b1bb\") " pod="kube-system/kindnet-srprb"
	Oct 25 10:20:33 newest-cni-667966 kubelet[1319]: I1025 10:20:33.165426    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02e64d9a-cbe3-4e98-81a0-7c609fa2b1bb-xtables-lock\") pod \"kindnet-srprb\" (UID: \"02e64d9a-cbe3-4e98-81a0-7c609fa2b1bb\") " pod="kube-system/kindnet-srprb"
	Oct 25 10:20:33 newest-cni-667966 kubelet[1319]: I1025 10:20:33.165788    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phrh2\" (UniqueName: \"kubernetes.io/projected/02e64d9a-cbe3-4e98-81a0-7c609fa2b1bb-kube-api-access-phrh2\") pod \"kindnet-srprb\" (UID: \"02e64d9a-cbe3-4e98-81a0-7c609fa2b1bb\") " pod="kube-system/kindnet-srprb"
	Oct 25 10:20:33 newest-cni-667966 kubelet[1319]: E1025 10:20:33.274053    1319 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 25 10:20:33 newest-cni-667966 kubelet[1319]: E1025 10:20:33.274096    1319 projected.go:196] Error preparing data for projected volume kube-api-access-phrh2 for pod kube-system/kindnet-srprb: configmap "kube-root-ca.crt" not found
	Oct 25 10:20:33 newest-cni-667966 kubelet[1319]: E1025 10:20:33.274196    1319 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/02e64d9a-cbe3-4e98-81a0-7c609fa2b1bb-kube-api-access-phrh2 podName:02e64d9a-cbe3-4e98-81a0-7c609fa2b1bb nodeName:}" failed. No retries permitted until 2025-10-25 10:20:33.774158433 +0000 UTC m=+5.932895527 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-phrh2" (UniqueName: "kubernetes.io/projected/02e64d9a-cbe3-4e98-81a0-7c609fa2b1bb-kube-api-access-phrh2") pod "kindnet-srprb" (UID: "02e64d9a-cbe3-4e98-81a0-7c609fa2b1bb") : configmap "kube-root-ca.crt" not found
	Oct 25 10:20:33 newest-cni-667966 kubelet[1319]: E1025 10:20:33.274270    1319 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 25 10:20:33 newest-cni-667966 kubelet[1319]: E1025 10:20:33.274295    1319 projected.go:196] Error preparing data for projected volume kube-api-access-dhzrk for pod kube-system/kube-proxy-vngwv: configmap "kube-root-ca.crt" not found
	Oct 25 10:20:33 newest-cni-667966 kubelet[1319]: E1025 10:20:33.274383    1319 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/273b5cf5-0600-4009-bab3-06b3a900b02d-kube-api-access-dhzrk podName:273b5cf5-0600-4009-bab3-06b3a900b02d nodeName:}" failed. No retries permitted until 2025-10-25 10:20:33.774349706 +0000 UTC m=+5.933086802 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dhzrk" (UniqueName: "kubernetes.io/projected/273b5cf5-0600-4009-bab3-06b3a900b02d-kube-api-access-dhzrk") pod "kube-proxy-vngwv" (UID: "273b5cf5-0600-4009-bab3-06b3a900b02d") : configmap "kube-root-ca.crt" not found
	Oct 25 10:20:35 newest-cni-667966 kubelet[1319]: I1025 10:20:35.001624    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-srprb" podStartSLOduration=2.001597994 podStartE2EDuration="2.001597994s" podCreationTimestamp="2025-10-25 10:20:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:20:35.001482633 +0000 UTC m=+7.160219724" watchObservedRunningTime="2025-10-25 10:20:35.001597994 +0000 UTC m=+7.160335088"
	Oct 25 10:20:35 newest-cni-667966 kubelet[1319]: I1025 10:20:35.013103    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vngwv" podStartSLOduration=2.013077415 podStartE2EDuration="2.013077415s" podCreationTimestamp="2025-10-25 10:20:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:20:35.012978655 +0000 UTC m=+7.171715749" watchObservedRunningTime="2025-10-25 10:20:35.013077415 +0000 UTC m=+7.171814508"
	

                                                
                                                
-- /stdout --
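
Note: the kube-controller-manager and kube-proxy blocks above consist almost entirely of client-go's shared-informer startup handshake ("Waiting for caches to sync" followed by "Caches are synced"). As a minimal illustrative sketch in Go (not minikube or Kubernetes source), the same pattern looks like this:

	// Illustrative only: the client-go pattern behind the
	// "Waiting for caches to sync" / "Caches are synced" lines above.
	package main

	import (
		"fmt"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		factory := informers.NewSharedInformerFactory(client, 30*time.Second)
		podsSynced := factory.Core().V1().Pods().Informer().HasSynced

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop) // components log "Waiting for caches to sync" here

		if !cache.WaitForCacheSync(stop, podsSynced) {
			panic("timed out waiting for caches to sync")
		}
		fmt.Println("caches are synced") // the state the controller lines above report
	}
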
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-667966 -n newest-cni-667966
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-667966 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-r94h4 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-667966 describe pod coredns-66bc5c9577-r94h4 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-667966 describe pod coredns-66bc5c9577-r94h4 storage-provisioner: exit status 1 (63.057974ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-r94h4" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-667966 describe pod coredns-66bc5c9577-r94h4 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.27s)
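
Note: the "non-running pods" probe above is kubectl's field selector status.phase!=Running. A minimal client-go equivalent (illustrative only, not the test-harness code) is:

	// Illustrative sketch: list pods whose phase is not Running,
	// mirroring kubectl --field-selector=status.phase!=Running above.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}
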

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.48s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-767846 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-767846 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (288.105926ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:20:44Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-767846 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
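
Note: the root cause repeated across these EnableAddonWhileActive/Pause failures is the paused-state probe: `sudo runc list -f json` exits with status 1 because /run/runc does not exist on the crio node. A minimal sketch of such a probe (hypothetical helper, not minikube's actual implementation) shows why a missing runc state directory surfaces as an error rather than an empty list:

	// Hypothetical sketch of a paused-container probe. When /run/runc is
	// absent (as on this crio node), runc exits non-zero and the whole
	// probe fails, which minikube reports as MK_ADDON_ENABLE_PAUSED above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcState mirrors the `runc list -f json` fields that matter here.
	type runcState struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func pausedContainers() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// e.g. "open /run/runc: no such file or directory"
			return nil, fmt.Errorf("runc list -f json: %w", err)
		}
		var states []runcState
		if err := json.Unmarshal(out, &states); err != nil {
			return nil, err
		}
		var paused []string
		for _, s := range states {
			if s.Status == "paused" {
				paused = append(paused, s.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := pausedContainers()
		fmt.Println(ids, err)
	}
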
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-767846 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-767846 describe deploy/metrics-server -n kube-system: exit status 1 (64.593171ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-767846 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
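
Note: the assertion expects the deployment image to contain " fake.domain/registry.k8s.io/echoserver:1.4", i.e. the --registries override prefixed onto the --images override for the MetricsServer addon. A sketch of that composition (assumed logic for illustration, not minikube source):

	// Assumed composition of the expected image reference asserted above.
	package main

	import "fmt"

	func overriddenImage(registries, images map[string]string, name string) string {
		img := images[name]
		if reg := registries[name]; reg != "" {
			return reg + "/" + img // registry override is prefixed onto the image
		}
		return img
	}

	func main() {
		images := map[string]string{"MetricsServer": "registry.k8s.io/echoserver:1.4"}
		registries := map[string]string{"MetricsServer": "fake.domain"}
		fmt.Println(overriddenImage(registries, images, "MetricsServer"))
		// Output: fake.domain/registry.k8s.io/echoserver:1.4
	}
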
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-767846
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-767846:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a861cbbe8f62d16ea86405f6b301634a956e7a957a4ea978424585deabae4058",
	        "Created": "2025-10-25T10:19:56.495133916Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 615811,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:19:56.544244562Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/a861cbbe8f62d16ea86405f6b301634a956e7a957a4ea978424585deabae4058/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a861cbbe8f62d16ea86405f6b301634a956e7a957a4ea978424585deabae4058/hostname",
	        "HostsPath": "/var/lib/docker/containers/a861cbbe8f62d16ea86405f6b301634a956e7a957a4ea978424585deabae4058/hosts",
	        "LogPath": "/var/lib/docker/containers/a861cbbe8f62d16ea86405f6b301634a956e7a957a4ea978424585deabae4058/a861cbbe8f62d16ea86405f6b301634a956e7a957a4ea978424585deabae4058-json.log",
	        "Name": "/default-k8s-diff-port-767846",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-767846:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-767846",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a861cbbe8f62d16ea86405f6b301634a956e7a957a4ea978424585deabae4058",
	                "LowerDir": "/var/lib/docker/overlay2/ddb4157cd5afee722521019e7523ab5e85d231f87d65a983b26a341edfbd1bbc-init/diff:/var/lib/docker/overlay2/9d1960ec6ab151df7efbd4d43f9ccd1aaf4e0bd9e7db4285118644a6d5a54279/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ddb4157cd5afee722521019e7523ab5e85d231f87d65a983b26a341edfbd1bbc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ddb4157cd5afee722521019e7523ab5e85d231f87d65a983b26a341edfbd1bbc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ddb4157cd5afee722521019e7523ab5e85d231f87d65a983b26a341edfbd1bbc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-767846",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-767846/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-767846",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-767846",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-767846",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6cb65e869fc48ca3e4029ea0ec7a2c1f783f8f11ae95f9be855eebc6678d1cc2",
	            "SandboxKey": "/var/run/docker/netns/6cb65e869fc4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-767846": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:0f:31:f2:c3:3c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "49994b8d670ad539016da4784c6cdaa9b9b52e8e74fc4aee0b1293b182f436c0",
	                    "EndpointID": "8ca64faf1be5ae3d63c8bbb1f51bcb4b1196bab95b1d896ff947d8adcf5e7814",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-767846",
	                        "a861cbbe8f62"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
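
Note: the inspect output shows every container port published on 127.0.0.1 with a dynamically assigned host port (22/tcp -> 33098). Minikube resolves these later in this log with a Go template passed to `docker container inspect -f`; a standalone sketch of the same lookup (illustrative, via os/exec):

	// Illustrative sketch: ask dockerd which host port a container port was
	// published on, using the same Go template minikube's cli_runner invokes
	// later in these logs.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hostPort(container, containerPort string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, containerPort)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		// e.g. 33098 for the SSH port shown in the inspect output above.
		port, err := hostPort("default-k8s-diff-port-767846", "22/tcp")
		fmt.Println(port, err)
	}
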
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-767846 -n default-k8s-diff-port-767846
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-767846 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-767846 logs -n 25: (1.260430727s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-119085 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                    │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │ 25 Oct 25 10:19 UTC │
	│ ssh     │ -p flannel-119085 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                               │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │                     │
	│ ssh     │ -p flannel-119085 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                         │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │ 25 Oct 25 10:19 UTC │
	│ ssh     │ -p flannel-119085 sudo cri-dockerd --version                                                                                                                                                                                                  │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                    │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ ssh     │ -p flannel-119085 sudo systemctl cat containerd --no-pager                                                                                                                                                                                    │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                             │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo cat /etc/containerd/config.toml                                                                                                                                                                                        │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo containerd config dump                                                                                                                                                                                                 │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                          │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo systemctl cat crio --no-pager                                                                                                                                                                                          │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-714798 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ ssh     │ -p flannel-119085 sudo crio config                                                                                                                                                                                                            │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ delete  │ -p flannel-119085                                                                                                                                                                                                                             │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ stop    │ -p old-k8s-version-714798 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p newest-cni-667966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-714798 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p old-k8s-version-714798 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-899665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ stop    │ -p no-preload-899665 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-667966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ stop    │ -p newest-cni-667966 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-767846 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-667966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:20:23
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:20:23.300709  624632 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:20:23.301096  624632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:20:23.301114  624632 out.go:374] Setting ErrFile to fd 2...
	I1025 10:20:23.301122  624632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:20:23.301572  624632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 10:20:23.302262  624632 out.go:368] Setting JSON to false
	I1025 10:20:23.304299  624632 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7372,"bootTime":1761380251,"procs":417,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 10:20:23.304458  624632 start.go:141] virtualization: kvm guest
	I1025 10:20:23.306960  624632 out.go:179] * [old-k8s-version-714798] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 10:20:23.308909  624632 notify.go:220] Checking for updates...
	I1025 10:20:23.309498  624632 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:20:23.311348  624632 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:20:23.313341  624632 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:20:23.315424  624632 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	I1025 10:20:23.317047  624632 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 10:20:23.319372  624632 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:20:23.322053  624632 config.go:182] Loaded profile config "old-k8s-version-714798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:20:23.324462  624632 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1025 10:20:22.722087  613485 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:20:22.723533  613485 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:20:22.723565  613485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:20:22.723639  613485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:20:22.752475  613485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:20:22.759476  613485 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:20:22.759507  613485 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:20:22.759575  613485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:20:22.794357  613485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:20:22.832395  613485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 10:20:22.930076  613485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:20:22.934919  613485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:20:22.938143  613485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:20:23.068420  613485 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1025 10:20:23.069958  613485 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-767846" to be "Ready" ...
	I1025 10:20:23.362383  613485 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1025 10:20:23.326650  624632 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:20:23.361560  624632 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 10:20:23.362134  624632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:20:23.474991  624632 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-25 10:20:23.456103682 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:20:23.475150  624632 docker.go:318] overlay module found
	I1025 10:20:23.476788  624632 out.go:179] * Using the docker driver based on existing profile
	I1025 10:20:23.478398  624632 start.go:305] selected driver: docker
	I1025 10:20:23.478425  624632 start.go:925] validating driver "docker" against &{Name:old-k8s-version-714798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-714798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:20:23.478569  624632 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:20:23.479393  624632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:20:23.571473  624632 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-25 10:20:23.559687458 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:20:23.571948  624632 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:20:23.572037  624632 cni.go:84] Creating CNI manager for ""
	I1025 10:20:23.572109  624632 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:20:23.572191  624632 start.go:349] cluster config:
	{Name:old-k8s-version-714798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-714798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:20:23.574966  624632 out.go:179] * Starting "old-k8s-version-714798" primary control-plane node in "old-k8s-version-714798" cluster
	I1025 10:20:23.576372  624632 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:20:23.577799  624632 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:20:23.579475  624632 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 10:20:23.579510  624632 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:20:23.579535  624632 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1025 10:20:23.579548  624632 cache.go:58] Caching tarball of preloaded images
	I1025 10:20:23.579656  624632 preload.go:233] Found /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 10:20:23.579675  624632 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1025 10:20:23.579810  624632 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/old-k8s-version-714798/config.json ...
	I1025 10:20:23.607233  624632 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:20:23.607260  624632 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:20:23.607282  624632 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:20:23.607349  624632 start.go:360] acquireMachinesLock for old-k8s-version-714798: {Name:mk97e2141704e9680122a6db3eca4557d7d2aee2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:20:23.607435  624632 start.go:364] duration metric: took 51.014µs to acquireMachinesLock for "old-k8s-version-714798"
	I1025 10:20:23.607461  624632 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:20:23.607471  624632 fix.go:54] fixHost starting: 
	I1025 10:20:23.607767  624632 cli_runner.go:164] Run: docker container inspect old-k8s-version-714798 --format={{.State.Status}}
	I1025 10:20:23.629577  624632 fix.go:112] recreateIfNeeded on old-k8s-version-714798: state=Stopped err=<nil>
	W1025 10:20:23.629619  624632 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:20:23.631403  624632 out.go:252] * Restarting existing docker container for "old-k8s-version-714798" ...
	I1025 10:20:23.631491  624632 cli_runner.go:164] Run: docker start old-k8s-version-714798
	I1025 10:20:23.932468  624632 cli_runner.go:164] Run: docker container inspect old-k8s-version-714798 --format={{.State.Status}}
	I1025 10:20:23.956085  624632 kic.go:430] container "old-k8s-version-714798" state is running.
	I1025 10:20:23.956547  624632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-714798
	I1025 10:20:23.978748  624632 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/old-k8s-version-714798/config.json ...
	I1025 10:20:23.979037  624632 machine.go:93] provisionDockerMachine start ...
	I1025 10:20:23.979124  624632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:20:24.001727  624632 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:24.002092  624632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1025 10:20:24.002114  624632 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:20:24.003059  624632 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47234->127.0.0.1:33108: read: connection reset by peer
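The dial above targets the host port Docker publishes for the container's port 22, and the initial connection reset is typically transient while the restarted container finishes booting. A minimal sketch of resolving that port by hand, reusing the inspect template and key path that appear in this log (profile name old-k8s-version-714798 assumed):

	PORT=$(docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  old-k8s-version-714798)
	ssh -p "$PORT" \
	  -i /home/jenkins/minikube-integration/21767-321838/.minikube/machines/old-k8s-version-714798/id_rsa \
	  docker@127.0.0.1 hostname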
	I1025 10:20:27.149919  624632 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-714798
	
	I1025 10:20:27.149957  624632 ubuntu.go:182] provisioning hostname "old-k8s-version-714798"
	I1025 10:20:27.150022  624632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:20:27.169715  624632 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:27.170006  624632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1025 10:20:27.170026  624632 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-714798 && echo "old-k8s-version-714798" | sudo tee /etc/hostname
	I1025 10:20:27.339339  624632 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-714798
	
	I1025 10:20:27.339446  624632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:20:27.361033  624632 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:27.361258  624632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1025 10:20:27.361276  624632 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-714798' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-714798/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-714798' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:20:27.509983  624632 main.go:141] libmachine: SSH cmd err, output: <nil>: 
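The multi-line command above is idempotent: it rewrites or appends the 127.0.1.1 entry only when the hostname is missing from /etc/hosts. A quick check inside the guest (sketch):

	grep -n 'old-k8s-version-714798' /etc/hosts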
	I1025 10:20:27.510028  624632 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-321838/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-321838/.minikube}
	I1025 10:20:27.510057  624632 ubuntu.go:190] setting up certificates
	I1025 10:20:27.510072  624632 provision.go:84] configureAuth start
	I1025 10:20:27.510153  624632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-714798
	I1025 10:20:27.529756  624632 provision.go:143] copyHostCerts
	I1025 10:20:27.529844  624632 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem, removing ...
	I1025 10:20:27.529877  624632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem
	I1025 10:20:27.529973  624632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem (1078 bytes)
	I1025 10:20:27.530097  624632 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem, removing ...
	I1025 10:20:27.530106  624632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem
	I1025 10:20:27.530135  624632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem (1123 bytes)
	I1025 10:20:27.530196  624632 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem, removing ...
	I1025 10:20:27.530203  624632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem
	I1025 10:20:27.530228  624632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem (1679 bytes)
	I1025 10:20:27.530280  624632 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-714798 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-714798]
	I1025 10:20:27.651694  624632 provision.go:177] copyRemoteCerts
	I1025 10:20:27.651767  624632 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:20:27.651805  624632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:20:27.671792  624632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/old-k8s-version-714798/id_rsa Username:docker}
	I1025 10:20:27.786744  624632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:20:27.810211  624632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1025 10:20:27.831489  624632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:20:27.856169  624632 provision.go:87] duration metric: took 346.080135ms to configureAuth
	I1025 10:20:27.856203  624632 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:20:27.856399  624632 config.go:182] Loaded profile config "old-k8s-version-714798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:20:27.856502  624632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:20:27.877756  624632 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:27.877983  624632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1025 10:20:27.878001  624632 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:20:28.211904  624632 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:20:28.211952  624632 machine.go:96] duration metric: took 4.232896794s to provisionDockerMachine
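The step above writes a CRIO_MINIKUBE_OPTIONS override and restarts the runtime over SSH. A sketch of verifying the result on the node, using the same paths the provisioner wrote:

	cat /etc/sysconfig/crio.minikube
	sudo systemctl is-active crio   # expect "active" after the restart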
	I1025 10:20:28.211969  624632 start.go:293] postStartSetup for "old-k8s-version-714798" (driver="docker")
	I1025 10:20:28.211983  624632 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:20:28.212062  624632 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:20:28.212116  624632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:20:28.232261  624632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/old-k8s-version-714798/id_rsa Username:docker}
	I1025 10:20:28.682878  621097 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 10:20:28.682977  621097 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 10:20:28.683089  621097 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 10:20:28.683161  621097 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 10:20:28.683210  621097 kubeadm.go:318] OS: Linux
	I1025 10:20:28.683260  621097 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 10:20:28.683364  621097 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 10:20:28.683439  621097 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 10:20:28.683515  621097 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 10:20:28.683579  621097 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 10:20:28.683655  621097 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 10:20:28.683732  621097 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 10:20:28.683808  621097 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 10:20:28.683935  621097 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 10:20:28.684057  621097 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 10:20:28.684208  621097 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
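As the preflight message notes, the images can be pre-pulled. The documented command, pinned to the Kubernetes version this run uses (sketch):

	kubeadm config images pull --kubernetes-version v1.34.1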
	I1025 10:20:28.684296  621097 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 10:20:28.688528  621097 out.go:252]   - Generating certificates and keys ...
	I1025 10:20:28.688611  621097 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 10:20:28.688666  621097 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 10:20:28.688720  621097 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 10:20:28.688766  621097 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 10:20:28.688835  621097 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 10:20:28.688881  621097 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 10:20:28.688925  621097 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 10:20:28.689044  621097 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-667966] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1025 10:20:28.689111  621097 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 10:20:28.689223  621097 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-667966] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1025 10:20:28.689297  621097 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 10:20:28.689401  621097 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 10:20:28.689469  621097 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 10:20:28.689557  621097 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 10:20:28.689639  621097 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 10:20:28.689728  621097 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 10:20:28.689811  621097 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 10:20:28.689901  621097 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 10:20:28.689989  621097 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 10:20:28.690121  621097 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 10:20:28.690215  621097 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 10:20:28.691995  621097 out.go:252]   - Booting up control plane ...
	I1025 10:20:28.692112  621097 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 10:20:28.692207  621097 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 10:20:28.692290  621097 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 10:20:28.692454  621097 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 10:20:28.692597  621097 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 10:20:28.692781  621097 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 10:20:28.692909  621097 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 10:20:28.692983  621097 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 10:20:28.693124  621097 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 10:20:28.693209  621097 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 10:20:28.693263  621097 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 505.345924ms
	I1025 10:20:28.693406  621097 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 10:20:28.693520  621097 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1025 10:20:28.693632  621097 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 10:20:28.693745  621097 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 10:20:28.693848  621097 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.855252313s
	I1025 10:20:28.693938  621097 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.449590605s
	I1025 10:20:28.694035  621097 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.501140692s
	I1025 10:20:28.694201  621097 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 10:20:28.694408  621097 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 10:20:28.694459  621097 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 10:20:28.694719  621097 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-667966 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 10:20:28.694815  621097 kubeadm.go:318] [bootstrap-token] Using token: a7ffqx.vn3kytu0edce2nju
	I1025 10:20:28.696404  621097 out.go:252]   - Configuring RBAC rules ...
	I1025 10:20:28.696521  621097 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 10:20:28.696638  621097 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 10:20:28.696841  621097 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 10:20:28.697023  621097 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 10:20:28.697209  621097 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 10:20:28.697373  621097 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 10:20:28.697489  621097 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 10:20:28.697532  621097 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 10:20:28.697570  621097 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 10:20:28.697576  621097 kubeadm.go:318] 
	I1025 10:20:28.697634  621097 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 10:20:28.697640  621097 kubeadm.go:318] 
	I1025 10:20:28.697702  621097 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 10:20:28.697707  621097 kubeadm.go:318] 
	I1025 10:20:28.697727  621097 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 10:20:28.697779  621097 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
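The log is truncated here; kubeadm's standard post-init instructions continue with a chown so the copied kubeconfig is readable by the regular user. For reference, the usual full sequence is:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config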
	
	
	==> CRI-O <==
	Oct 25 10:20:33 default-k8s-diff-port-767846 crio[777]: time="2025-10-25T10:20:33.973630195Z" level=info msg="Starting container: b5b30e584bb729504186339968cafc3bbdd70606e5843dfd49f6bb026027929c" id=024b0516-46fb-4bf4-a46f-5d3e3009f3ba name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:20:33 default-k8s-diff-port-767846 crio[777]: time="2025-10-25T10:20:33.976767484Z" level=info msg="Started container" PID=1891 containerID=b5b30e584bb729504186339968cafc3bbdd70606e5843dfd49f6bb026027929c description=kube-system/coredns-66bc5c9577-rznxv/coredns id=024b0516-46fb-4bf4-a46f-5d3e3009f3ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=18db5c5c6497a3ee526bddcf1121940099de2ed291c067a99b4ec04369117df6
	Oct 25 10:20:36 default-k8s-diff-port-767846 crio[777]: time="2025-10-25T10:20:36.85941782Z" level=info msg="Running pod sandbox: default/busybox/POD" id=89719ff7-2ab7-4559-bc27-ce7be46c064b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:20:36 default-k8s-diff-port-767846 crio[777]: time="2025-10-25T10:20:36.85952412Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:36 default-k8s-diff-port-767846 crio[777]: time="2025-10-25T10:20:36.865120617Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:166805f84fe23b48d3c1edc790a6ac8a33fb1c6b56520e90f529120676f8a6b1 UID:15f6b26e-81c5-48eb-9bd1-5674b56ca028 NetNS:/var/run/netns/421b4cab-3a33-4daf-9bd4-a9e7f311b5eb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000380530}] Aliases:map[]}"
	Oct 25 10:20:36 default-k8s-diff-port-767846 crio[777]: time="2025-10-25T10:20:36.865160488Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 25 10:20:36 default-k8s-diff-port-767846 crio[777]: time="2025-10-25T10:20:36.878184123Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:166805f84fe23b48d3c1edc790a6ac8a33fb1c6b56520e90f529120676f8a6b1 UID:15f6b26e-81c5-48eb-9bd1-5674b56ca028 NetNS:/var/run/netns/421b4cab-3a33-4daf-9bd4-a9e7f311b5eb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000380530}] Aliases:map[]}"
	Oct 25 10:20:36 default-k8s-diff-port-767846 crio[777]: time="2025-10-25T10:20:36.878403194Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 25 10:20:36 default-k8s-diff-port-767846 crio[777]: time="2025-10-25T10:20:36.879594224Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 10:20:36 default-k8s-diff-port-767846 crio[777]: time="2025-10-25T10:20:36.881006799Z" level=info msg="Ran pod sandbox 166805f84fe23b48d3c1edc790a6ac8a33fb1c6b56520e90f529120676f8a6b1 with infra container: default/busybox/POD" id=89719ff7-2ab7-4559-bc27-ce7be46c064b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:20:36 default-k8s-diff-port-767846 crio[777]: time="2025-10-25T10:20:36.882634698Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1515d365-ce3d-4836-9f95-756433b6bf7f name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:20:36 default-k8s-diff-port-767846 crio[777]: time="2025-10-25T10:20:36.882807994Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=1515d365-ce3d-4836-9f95-756433b6bf7f name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:20:36 default-k8s-diff-port-767846 crio[777]: time="2025-10-25T10:20:36.882873158Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=1515d365-ce3d-4836-9f95-756433b6bf7f name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:20:36 default-k8s-diff-port-767846 crio[777]: time="2025-10-25T10:20:36.88395252Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=855f9c0b-b1af-43bc-8991-113db174d7d9 name=/runtime.v1.ImageService/PullImage
	Oct 25 10:20:36 default-k8s-diff-port-767846 crio[777]: time="2025-10-25T10:20:36.886155417Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 25 10:20:38 default-k8s-diff-port-767846 crio[777]: time="2025-10-25T10:20:38.961510591Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=855f9c0b-b1af-43bc-8991-113db174d7d9 name=/runtime.v1.ImageService/PullImage
	Oct 25 10:20:38 default-k8s-diff-port-767846 crio[777]: time="2025-10-25T10:20:38.962458682Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=113e8960-6408-4fde-8dce-aefec680e277 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:20:38 default-k8s-diff-port-767846 crio[777]: time="2025-10-25T10:20:38.964027635Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cd215484-39e4-4116-8ee4-deb41605b534 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:20:38 default-k8s-diff-port-767846 crio[777]: time="2025-10-25T10:20:38.967894034Z" level=info msg="Creating container: default/busybox/busybox" id=d3d9712a-5250-4efb-acf7-44e53285a187 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:20:38 default-k8s-diff-port-767846 crio[777]: time="2025-10-25T10:20:38.968027904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:38 default-k8s-diff-port-767846 crio[777]: time="2025-10-25T10:20:38.97186175Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:38 default-k8s-diff-port-767846 crio[777]: time="2025-10-25T10:20:38.972336261Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:38 default-k8s-diff-port-767846 crio[777]: time="2025-10-25T10:20:38.999545884Z" level=info msg="Created container 4b0c1c7ef49054ae3c9fdaee12cb5bd55b0ad7bbd1b038cf9a9cdfa26d182a92: default/busybox/busybox" id=d3d9712a-5250-4efb-acf7-44e53285a187 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:20:39 default-k8s-diff-port-767846 crio[777]: time="2025-10-25T10:20:39.000455407Z" level=info msg="Starting container: 4b0c1c7ef49054ae3c9fdaee12cb5bd55b0ad7bbd1b038cf9a9cdfa26d182a92" id=c50a52e9-dc3d-47e3-803e-acb30b373267 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:20:39 default-k8s-diff-port-767846 crio[777]: time="2025-10-25T10:20:39.00275054Z" level=info msg="Started container" PID=1964 containerID=4b0c1c7ef49054ae3c9fdaee12cb5bd55b0ad7bbd1b038cf9a9cdfa26d182a92 description=default/busybox/busybox id=c50a52e9-dc3d-47e3-803e-acb30b373267 name=/runtime.v1.RuntimeService/StartContainer sandboxID=166805f84fe23b48d3c1edc790a6ac8a33fb1c6b56520e90f529120676f8a6b1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	4b0c1c7ef4905       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   166805f84fe23       busybox                                                default
	b5b30e584bb72       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   18db5c5c6497a       coredns-66bc5c9577-rznxv                               kube-system
	4795d16092db4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   325d85159b8e4       storage-provisioner                                    kube-system
	bacc9d6da6d26       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   6565e0663c3dc       kindnet-vcqs2                                          kube-system
	4197ea7f66789       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   d277b8549f03f       kube-proxy-cvm5c                                       kube-system
	874ac3e1096b7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      34 seconds ago      Running             kube-apiserver            0                   ed52a56f2cc9c       kube-apiserver-default-k8s-diff-port-767846            kube-system
	5af76fc7765e6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      34 seconds ago      Running             etcd                      0                   f0041cf8f27f4       etcd-default-k8s-diff-port-767846                      kube-system
	76be3af055a72       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      34 seconds ago      Running             kube-scheduler            0                   2431ed2887c9d       kube-scheduler-default-k8s-diff-port-767846            kube-system
	09329f6d72fd1       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      34 seconds ago      Running             kube-controller-manager   0                   77cff5b0380ba       kube-controller-manager-default-k8s-diff-port-767846   kube-system
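The table above is the CRI's view of the node. An equivalent query straight against CRI-O's socket (sketch; assumes crictl is installed on the node):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a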
	
	
	==> coredns [b5b30e584bb729504186339968cafc3bbdd70606e5843dfd49f6bb026027929c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:32795 - 40127 "HINFO IN 5533733157043045339.9179076166454370038. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.150199261s
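The single HINFO lookup above is CoreDNS's loop-detection probe; the NXDOMAIN answer is healthy. A manual in-cluster resolution check looks like this (sketch; the pod name is arbitrary and the image is the one already pulled by this run):

	kubectl run dnscheck --rm -it --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc -- nslookup kubernetes.default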
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-767846
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-767846
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=default-k8s-diff-port-767846
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_20_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:20:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-767846
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:20:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:20:36 +0000   Sat, 25 Oct 2025 10:20:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:20:36 +0000   Sat, 25 Oct 2025 10:20:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:20:36 +0000   Sat, 25 Oct 2025 10:20:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:20:36 +0000   Sat, 25 Oct 2025 10:20:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-767846
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                993ff0b7-fce7-4433-b2bb-acc59f575ba5
	  Boot ID:                    41de8bfa-0cc1-441f-80d4-c56fb8371229
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-rznxv                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-default-k8s-diff-port-767846                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-vcqs2                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-767846             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-767846    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-cvm5c                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-767846             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node default-k8s-diff-port-767846 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node default-k8s-diff-port-767846 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node default-k8s-diff-port-767846 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node default-k8s-diff-port-767846 event: Registered Node default-k8s-diff-port-767846 in Controller
	  Normal  NodeReady                13s   kubelet          Node default-k8s-diff-port-767846 status is now: NodeReady
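The node report above can be reproduced directly against the cluster (sketch):

	kubectl describe node default-k8s-diff-port-767846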
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[Oct25 10:18] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 3d 4d bf 49 5d 08 06
	[  +0.000365] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fe 72 b8 ab d2 81 08 06
	[ +29.291338] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 96 23 11 37 e3 00 08 06
	[  +0.000335] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[ +21.527050] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 89 98 95 1f c3 08 06
	[  +0.000689] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[Oct25 10:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[  +9.472150] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
	[  +6.585715] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ce 90 e9 36 a0 95 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[ +15.111475] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 5e 04 d2 54 0d 08 06
	[  +0.000467] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
	
	
	==> etcd [5af76fc7765e67c76b0bc82cb25515a1905cf7f44aa5dd76c7924d8879ddddc1] <==
	{"level":"warn","ts":"2025-10-25T10:20:13.252634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:13.263263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:13.272989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:13.280447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:13.288899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:13.299124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:13.306576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:13.314769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:13.322787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:13.334087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:13.344247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:13.355576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:13.361871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:13.368878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:13.380482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:13.388783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:13.398423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:13.406916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:13.414410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:13.423171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:13.431262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:13.446155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:13.455198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:13.464814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:13.532936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41090","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:20:46 up  2:03,  0 user,  load average: 5.37, 4.85, 5.93
	Linux default-k8s-diff-port-767846 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bacc9d6da6d26933eae3208b2ee662aca21889c701b977c76361df4b730a61df] <==
	I1025 10:20:22.963651       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:20:22.983587       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1025 10:20:22.983767       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:20:22.983790       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:20:22.983819       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:20:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:20:23.187175       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:20:23.187228       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:20:23.187240       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:20:23.187439       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:20:23.187720       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 10:20:23.257551       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 10:20:23.257746       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1025 10:20:24.987500       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:20:24.987532       1 metrics.go:72] Registering metrics
	I1025 10:20:24.987636       1 controller.go:711] "Syncing nftables rules"
	I1025 10:20:33.188707       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 10:20:33.188789       1 main.go:301] handling current node
	I1025 10:20:43.189254       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 10:20:43.189296       1 main.go:301] handling current node
	
	
	==> kube-apiserver [874ac3e1096b75b19e54d6724206264af8dd225e303ddd1480a40145b05a7062] <==
	I1025 10:20:14.067036       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 10:20:14.067045       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:20:14.067052       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:20:14.067062       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:20:14.067106       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 10:20:14.069151       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:20:14.088200       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 10:20:14.960350       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 10:20:14.964944       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 10:20:14.964962       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:20:15.561084       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:20:15.607827       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:20:15.664467       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 10:20:15.671697       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1025 10:20:15.672925       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:20:15.678188       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:20:16.258024       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:20:16.556190       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:20:16.567619       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 10:20:16.577461       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 10:20:21.910123       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:20:22.111660       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:20:22.116949       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:20:22.309266       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1025 10:20:44.667580       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:33488: use of closed network connection
	
	
	==> kube-controller-manager [09329f6d72fd1fb8d208eea3e924d8624cd725a001d4caf3bea5cc9a53a4b24d] <==
	I1025 10:20:21.240731       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 10:20:21.255381       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 10:20:21.256513       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 10:20:21.256547       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 10:20:21.256601       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:20:21.256611       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 10:20:21.256660       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 10:20:21.256672       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 10:20:21.256706       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 10:20:21.256854       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 10:20:21.256858       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 10:20:21.257700       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 10:20:21.257715       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 10:20:21.257753       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 10:20:21.257829       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 10:20:21.258058       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 10:20:21.259359       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 10:20:21.262622       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 10:20:21.263950       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:20:21.263959       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:20:21.269357       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 10:20:21.270820       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 10:20:21.276080       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:20:21.281251       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 10:20:36.208186       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4197ea7f66789d225032db8a2d3bcca4c6ba37a1d089d368d94505a558b2bf5f] <==
	I1025 10:20:22.798645       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:20:22.882596       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:20:22.982737       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:20:22.982791       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1025 10:20:22.982950       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:20:23.014372       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:20:23.014466       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:20:23.022616       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:20:23.026856       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:20:23.026963       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:20:23.030624       1 config.go:200] "Starting service config controller"
	I1025 10:20:23.030678       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:20:23.030749       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:20:23.030787       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:20:23.030889       1 config.go:309] "Starting node config controller"
	I1025 10:20:23.030897       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:20:23.030904       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:20:23.031440       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:20:23.031452       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:20:23.130995       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:20:23.131127       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 10:20:23.131801       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [76be3af055a72f3faff3a507253d57e90070e1260623f23e101cb1e70145d9a7] <==
	E1025 10:20:14.017231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 10:20:14.017344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 10:20:14.017436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:20:14.017513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:20:14.017510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 10:20:14.017576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:20:14.017606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:20:14.017622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 10:20:14.017686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 10:20:14.017700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 10:20:14.017845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 10:20:14.017865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:20:14.947937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 10:20:14.968211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 10:20:14.971260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:20:14.989531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 10:20:15.015414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 10:20:15.034032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:20:15.087157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:20:15.089117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 10:20:15.169823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:20:15.189152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 10:20:15.203593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 10:20:15.478459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1025 10:20:17.512761       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:20:17 default-k8s-diff-port-767846 kubelet[1356]: I1025 10:20:17.558850    1356 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-767846" podStartSLOduration=1.5588301740000001 podStartE2EDuration="1.558830174s" podCreationTimestamp="2025-10-25 10:20:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:20:17.55873557 +0000 UTC m=+1.239303889" watchObservedRunningTime="2025-10-25 10:20:17.558830174 +0000 UTC m=+1.239398493"
	Oct 25 10:20:17 default-k8s-diff-port-767846 kubelet[1356]: I1025 10:20:17.561137    1356 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-767846" podStartSLOduration=1.56111088 podStartE2EDuration="1.56111088s" podCreationTimestamp="2025-10-25 10:20:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:20:17.53978814 +0000 UTC m=+1.220356454" watchObservedRunningTime="2025-10-25 10:20:17.56111088 +0000 UTC m=+1.241679198"
	Oct 25 10:20:17 default-k8s-diff-port-767846 kubelet[1356]: I1025 10:20:17.588421    1356 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-767846" podStartSLOduration=1.5884003 podStartE2EDuration="1.5884003s" podCreationTimestamp="2025-10-25 10:20:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:20:17.577406847 +0000 UTC m=+1.257975185" watchObservedRunningTime="2025-10-25 10:20:17.5884003 +0000 UTC m=+1.268968619"
	Oct 25 10:20:17 default-k8s-diff-port-767846 kubelet[1356]: I1025 10:20:17.600377    1356 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-767846" podStartSLOduration=1.6003540969999999 podStartE2EDuration="1.600354097s" podCreationTimestamp="2025-10-25 10:20:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:20:17.588484406 +0000 UTC m=+1.269052729" watchObservedRunningTime="2025-10-25 10:20:17.600354097 +0000 UTC m=+1.280922408"
	Oct 25 10:20:21 default-k8s-diff-port-767846 kubelet[1356]: I1025 10:20:21.274189    1356 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 25 10:20:21 default-k8s-diff-port-767846 kubelet[1356]: I1025 10:20:21.274922    1356 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 25 10:20:22 default-k8s-diff-port-767846 kubelet[1356]: I1025 10:20:22.343165    1356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh7s7\" (UniqueName: \"kubernetes.io/projected/42278e98-5278-4efa-b484-ec73c16fc851-kube-api-access-bh7s7\") pod \"kube-proxy-cvm5c\" (UID: \"42278e98-5278-4efa-b484-ec73c16fc851\") " pod="kube-system/kube-proxy-cvm5c"
	Oct 25 10:20:22 default-k8s-diff-port-767846 kubelet[1356]: I1025 10:20:22.343231    1356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/42278e98-5278-4efa-b484-ec73c16fc851-kube-proxy\") pod \"kube-proxy-cvm5c\" (UID: \"42278e98-5278-4efa-b484-ec73c16fc851\") " pod="kube-system/kube-proxy-cvm5c"
	Oct 25 10:20:22 default-k8s-diff-port-767846 kubelet[1356]: I1025 10:20:22.343370    1356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42278e98-5278-4efa-b484-ec73c16fc851-xtables-lock\") pod \"kube-proxy-cvm5c\" (UID: \"42278e98-5278-4efa-b484-ec73c16fc851\") " pod="kube-system/kube-proxy-cvm5c"
	Oct 25 10:20:22 default-k8s-diff-port-767846 kubelet[1356]: I1025 10:20:22.343419    1356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42278e98-5278-4efa-b484-ec73c16fc851-lib-modules\") pod \"kube-proxy-cvm5c\" (UID: \"42278e98-5278-4efa-b484-ec73c16fc851\") " pod="kube-system/kube-proxy-cvm5c"
	Oct 25 10:20:22 default-k8s-diff-port-767846 kubelet[1356]: I1025 10:20:22.443898    1356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e41fd0fd-97c4-44ef-a645-cf0136340098-lib-modules\") pod \"kindnet-vcqs2\" (UID: \"e41fd0fd-97c4-44ef-a645-cf0136340098\") " pod="kube-system/kindnet-vcqs2"
	Oct 25 10:20:22 default-k8s-diff-port-767846 kubelet[1356]: I1025 10:20:22.443997    1356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mz4k\" (UniqueName: \"kubernetes.io/projected/e41fd0fd-97c4-44ef-a645-cf0136340098-kube-api-access-7mz4k\") pod \"kindnet-vcqs2\" (UID: \"e41fd0fd-97c4-44ef-a645-cf0136340098\") " pod="kube-system/kindnet-vcqs2"
	Oct 25 10:20:22 default-k8s-diff-port-767846 kubelet[1356]: I1025 10:20:22.444071    1356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e41fd0fd-97c4-44ef-a645-cf0136340098-xtables-lock\") pod \"kindnet-vcqs2\" (UID: \"e41fd0fd-97c4-44ef-a645-cf0136340098\") " pod="kube-system/kindnet-vcqs2"
	Oct 25 10:20:22 default-k8s-diff-port-767846 kubelet[1356]: I1025 10:20:22.444091    1356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e41fd0fd-97c4-44ef-a645-cf0136340098-cni-cfg\") pod \"kindnet-vcqs2\" (UID: \"e41fd0fd-97c4-44ef-a645-cf0136340098\") " pod="kube-system/kindnet-vcqs2"
	Oct 25 10:20:23 default-k8s-diff-port-767846 kubelet[1356]: I1025 10:20:23.505087    1356 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-vcqs2" podStartSLOduration=1.505063264 podStartE2EDuration="1.505063264s" podCreationTimestamp="2025-10-25 10:20:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:20:23.482698472 +0000 UTC m=+7.163266782" watchObservedRunningTime="2025-10-25 10:20:23.505063264 +0000 UTC m=+7.185631583"
	Oct 25 10:20:30 default-k8s-diff-port-767846 kubelet[1356]: I1025 10:20:30.538752    1356 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cvm5c" podStartSLOduration=8.538729428 podStartE2EDuration="8.538729428s" podCreationTimestamp="2025-10-25 10:20:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:20:23.522780779 +0000 UTC m=+7.203349097" watchObservedRunningTime="2025-10-25 10:20:30.538729428 +0000 UTC m=+14.219297745"
	Oct 25 10:20:33 default-k8s-diff-port-767846 kubelet[1356]: I1025 10:20:33.488887    1356 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 25 10:20:33 default-k8s-diff-port-767846 kubelet[1356]: I1025 10:20:33.622866    1356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7eae20c-8d39-4486-ab11-13675911180f-config-volume\") pod \"coredns-66bc5c9577-rznxv\" (UID: \"d7eae20c-8d39-4486-ab11-13675911180f\") " pod="kube-system/coredns-66bc5c9577-rznxv"
	Oct 25 10:20:33 default-k8s-diff-port-767846 kubelet[1356]: I1025 10:20:33.622939    1356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glzl8\" (UniqueName: \"kubernetes.io/projected/d7eae20c-8d39-4486-ab11-13675911180f-kube-api-access-glzl8\") pod \"coredns-66bc5c9577-rznxv\" (UID: \"d7eae20c-8d39-4486-ab11-13675911180f\") " pod="kube-system/coredns-66bc5c9577-rznxv"
	Oct 25 10:20:33 default-k8s-diff-port-767846 kubelet[1356]: I1025 10:20:33.623045    1356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9td2g\" (UniqueName: \"kubernetes.io/projected/06a917da-eaa2-4b50-8c56-31a0ca7d14e2-kube-api-access-9td2g\") pod \"storage-provisioner\" (UID: \"06a917da-eaa2-4b50-8c56-31a0ca7d14e2\") " pod="kube-system/storage-provisioner"
	Oct 25 10:20:33 default-k8s-diff-port-767846 kubelet[1356]: I1025 10:20:33.623186    1356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/06a917da-eaa2-4b50-8c56-31a0ca7d14e2-tmp\") pod \"storage-provisioner\" (UID: \"06a917da-eaa2-4b50-8c56-31a0ca7d14e2\") " pod="kube-system/storage-provisioner"
	Oct 25 10:20:34 default-k8s-diff-port-767846 kubelet[1356]: I1025 10:20:34.534250    1356 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rznxv" podStartSLOduration=12.534221985 podStartE2EDuration="12.534221985s" podCreationTimestamp="2025-10-25 10:20:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:20:34.510913708 +0000 UTC m=+18.191482027" watchObservedRunningTime="2025-10-25 10:20:34.534221985 +0000 UTC m=+18.214790304"
	Oct 25 10:20:36 default-k8s-diff-port-767846 kubelet[1356]: I1025 10:20:36.552525    1356 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.552499816 podStartE2EDuration="13.552499816s" podCreationTimestamp="2025-10-25 10:20:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:20:34.558376043 +0000 UTC m=+18.238944361" watchObservedRunningTime="2025-10-25 10:20:36.552499816 +0000 UTC m=+20.233068136"
	Oct 25 10:20:36 default-k8s-diff-port-767846 kubelet[1356]: I1025 10:20:36.642516    1356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcfq2\" (UniqueName: \"kubernetes.io/projected/15f6b26e-81c5-48eb-9bd1-5674b56ca028-kube-api-access-hcfq2\") pod \"busybox\" (UID: \"15f6b26e-81c5-48eb-9bd1-5674b56ca028\") " pod="default/busybox"
	Oct 25 10:20:39 default-k8s-diff-port-767846 kubelet[1356]: I1025 10:20:39.523387    1356 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.443211883 podStartE2EDuration="3.523364991s" podCreationTimestamp="2025-10-25 10:20:36 +0000 UTC" firstStartedPulling="2025-10-25 10:20:36.883298858 +0000 UTC m=+20.563867170" lastFinishedPulling="2025-10-25 10:20:38.963451966 +0000 UTC m=+22.644020278" observedRunningTime="2025-10-25 10:20:39.522979694 +0000 UTC m=+23.203548009" watchObservedRunningTime="2025-10-25 10:20:39.523364991 +0000 UTC m=+23.203933307"
	
	
	==> storage-provisioner [4795d16092db45a6505228e0fef962e09591e619992144245535010b8169062f] <==
	I1025 10:20:33.967135       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:20:33.995969       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:20:33.996075       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 10:20:33.999971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:20:34.010608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:20:34.010903       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:20:34.012535       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-767846_9aa57826-9b85-4280-aeb8-e76797f6b694!
	I1025 10:20:34.012665       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bfe6efb6-08ca-4115-b55d-4a5493fee922", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-767846_9aa57826-9b85-4280-aeb8-e76797f6b694 became leader
	W1025 10:20:34.022686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:20:34.052971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:20:34.112923       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-767846_9aa57826-9b85-4280-aeb8-e76797f6b694!
	W1025 10:20:36.056832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:20:36.062758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:20:38.066419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:20:38.071372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:20:40.075002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:20:40.079460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:20:42.083245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:20:42.089603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:20:44.093115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:20:44.097485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:20:46.100675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:20:46.106386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
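Note on the storage-provisioner log above: the repeated warnings.go:70 lines come from its leader election still locking on v1 Endpoints, which is deprecated since v1.33 in favor of coordination Leases. Below is a minimal sketch of the Lease-based lock in client-go; the lock name and namespace are copied from the log, while the identity handling, callbacks, and timings are illustrative assumptions, not the provisioner's actual code.

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// A coordination.k8s.io/v1 Lease replaces the deprecated v1 Endpoints
	// object as the election lock, so client-go stops emitting the
	// warnings.go:70 deprecation messages seen above.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath", // lock name taken from the log above
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("HOSTNAME")}, // illustrative identity
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second, // illustrative timings
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// start the provisioner controller here
			},
			OnStoppedLeading: func() { os.Exit(1) },
		},
	})
}

With a LeaseLock, the acquire/acquired sequence seen at leaderelection.go:243/253 proceeds the same way, but against coordination.k8s.io/v1, so the Endpoints deprecation warnings disappear.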
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-767846 -n default-k8s-diff-port-767846
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-767846 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.48s)
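The "Waiting for caches to sync" / "Caches are synced" pairs that recur in the kindnet, kube-proxy, controller-manager, and scheduler logs above are client-go's shared-informer startup handshake; the connection-refused and RBAC "forbidden" reflector errors during it are retried internally. A minimal sketch of that handshake follows, assuming an ordinary kubeconfig; the resync period and timeout are illustrative.

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	pods := factory.Core().V1().Pods().Informer()

	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	factory.Start(ctx.Done()) // launches the list+watch reflectors

	// This is the "Waiting for caches to sync" phase: list/watch failures
	// (like the connection-refused reflector errors above) are retried
	// internally until the initial list succeeds or the stop channel closes.
	if !cache.WaitForCacheSync(ctx.Done(), pods.HasSynced) {
		panic("caches never synced")
	}
	fmt.Println("caches are synced") // the moment the informer log lines record
}

This is why the scheduler's forbidden errors at 10:20:14-15 are harmless here: they stop once RBAC bootstrap completes, which is consistent with its "Caches are synced" line at 10:20:17.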

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (7.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-667966 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-667966 --alsologtostderr -v=1: exit status 80 (2.479961132s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-667966 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 10:20:58.944555  634907 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:20:58.944749  634907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:20:58.944756  634907 out.go:374] Setting ErrFile to fd 2...
	I1025 10:20:58.944763  634907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:20:58.945051  634907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 10:20:58.945452  634907 out.go:368] Setting JSON to false
	I1025 10:20:58.945506  634907 mustload.go:65] Loading cluster: newest-cni-667966
	I1025 10:20:58.946057  634907 config.go:182] Loaded profile config "newest-cni-667966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:20:58.946718  634907 cli_runner.go:164] Run: docker container inspect newest-cni-667966 --format={{.State.Status}}
	I1025 10:20:58.981428  634907 host.go:66] Checking if "newest-cni-667966" exists ...
	I1025 10:20:58.981835  634907 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:20:59.058079  634907 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:84 OomKillDisable:false NGoroutines:90 SystemTime:2025-10-25 10:20:59.041630407 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:20:59.058956  634907 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-667966 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 10:20:59.061418  634907 out.go:179] * Pausing node newest-cni-667966 ... 
	I1025 10:20:59.062990  634907 host.go:66] Checking if "newest-cni-667966" exists ...
	I1025 10:20:59.063368  634907 ssh_runner.go:195] Run: systemctl --version
	I1025 10:20:59.063416  634907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-667966
	I1025 10:20:59.104450  634907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/newest-cni-667966/id_rsa Username:docker}
	I1025 10:20:59.231519  634907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:20:59.249754  634907 pause.go:52] kubelet running: true
	I1025 10:20:59.249847  634907 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:20:59.441967  634907 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:20:59.442468  634907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:20:59.545557  634907 cri.go:89] found id: "3c68bd23f6660eb6639e6181698b7136ae4ed8928495d52e175482795618807a"
	I1025 10:20:59.545586  634907 cri.go:89] found id: "b05ffe134a05ebdc673146b172ae89b63a2a4e55e75a9f8330b396ca51baaa1f"
	I1025 10:20:59.545591  634907 cri.go:89] found id: "dc5e1fe15e732a2803c1f34dbd191e88cbb7d2a206a70f2c5cceb65b9334f033"
	I1025 10:20:59.545595  634907 cri.go:89] found id: "9f8c1df6dfdf4d3f7a952f8fecf040c1639fbc9112d5b20da3d4311228fe970b"
	I1025 10:20:59.545599  634907 cri.go:89] found id: "043d021586bedd90d0ccb57b16a6588989a4f1d67466bdf08a11a2fad83d6525"
	I1025 10:20:59.545604  634907 cri.go:89] found id: "d1f99cc829179c6c6f2484ba5bc57e6507269d2e725b6feddf3428922eceb51d"
	I1025 10:20:59.545608  634907 cri.go:89] found id: ""
	I1025 10:20:59.545658  634907 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:20:59.561147  634907 retry.go:31] will retry after 352.461544ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:20:59Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:20:59.913769  634907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:20:59.929470  634907 pause.go:52] kubelet running: false
	I1025 10:20:59.929533  634907 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:21:00.057217  634907 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:21:00.057311  634907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:21:00.149481  634907 cri.go:89] found id: "3c68bd23f6660eb6639e6181698b7136ae4ed8928495d52e175482795618807a"
	I1025 10:21:00.149621  634907 cri.go:89] found id: "b05ffe134a05ebdc673146b172ae89b63a2a4e55e75a9f8330b396ca51baaa1f"
	I1025 10:21:00.149628  634907 cri.go:89] found id: "dc5e1fe15e732a2803c1f34dbd191e88cbb7d2a206a70f2c5cceb65b9334f033"
	I1025 10:21:00.149633  634907 cri.go:89] found id: "9f8c1df6dfdf4d3f7a952f8fecf040c1639fbc9112d5b20da3d4311228fe970b"
	I1025 10:21:00.149637  634907 cri.go:89] found id: "043d021586bedd90d0ccb57b16a6588989a4f1d67466bdf08a11a2fad83d6525"
	I1025 10:21:00.149656  634907 cri.go:89] found id: "d1f99cc829179c6c6f2484ba5bc57e6507269d2e725b6feddf3428922eceb51d"
	I1025 10:21:00.149660  634907 cri.go:89] found id: ""
	I1025 10:21:00.149751  634907 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:21:00.167892  634907 retry.go:31] will retry after 295.259842ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:21:00Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:21:00.463429  634907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:21:00.480462  634907 pause.go:52] kubelet running: false
	I1025 10:21:00.480527  634907 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:21:00.637208  634907 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:21:00.637303  634907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:21:00.711476  634907 cri.go:89] found id: "3c68bd23f6660eb6639e6181698b7136ae4ed8928495d52e175482795618807a"
	I1025 10:21:00.711498  634907 cri.go:89] found id: "b05ffe134a05ebdc673146b172ae89b63a2a4e55e75a9f8330b396ca51baaa1f"
	I1025 10:21:00.711502  634907 cri.go:89] found id: "dc5e1fe15e732a2803c1f34dbd191e88cbb7d2a206a70f2c5cceb65b9334f033"
	I1025 10:21:00.711505  634907 cri.go:89] found id: "9f8c1df6dfdf4d3f7a952f8fecf040c1639fbc9112d5b20da3d4311228fe970b"
	I1025 10:21:00.711508  634907 cri.go:89] found id: "043d021586bedd90d0ccb57b16a6588989a4f1d67466bdf08a11a2fad83d6525"
	I1025 10:21:00.711518  634907 cri.go:89] found id: "d1f99cc829179c6c6f2484ba5bc57e6507269d2e725b6feddf3428922eceb51d"
	I1025 10:21:00.711521  634907 cri.go:89] found id: ""
	I1025 10:21:00.711562  634907 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:21:00.725939  634907 retry.go:31] will retry after 378.707951ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:21:00Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:21:01.105672  634907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:21:01.121126  634907 pause.go:52] kubelet running: false
	I1025 10:21:01.121187  634907 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:21:01.244035  634907 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:21:01.244114  634907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:21:01.318201  634907 cri.go:89] found id: "3c68bd23f6660eb6639e6181698b7136ae4ed8928495d52e175482795618807a"
	I1025 10:21:01.318229  634907 cri.go:89] found id: "b05ffe134a05ebdc673146b172ae89b63a2a4e55e75a9f8330b396ca51baaa1f"
	I1025 10:21:01.318234  634907 cri.go:89] found id: "dc5e1fe15e732a2803c1f34dbd191e88cbb7d2a206a70f2c5cceb65b9334f033"
	I1025 10:21:01.318239  634907 cri.go:89] found id: "9f8c1df6dfdf4d3f7a952f8fecf040c1639fbc9112d5b20da3d4311228fe970b"
	I1025 10:21:01.318243  634907 cri.go:89] found id: "043d021586bedd90d0ccb57b16a6588989a4f1d67466bdf08a11a2fad83d6525"
	I1025 10:21:01.318247  634907 cri.go:89] found id: "d1f99cc829179c6c6f2484ba5bc57e6507269d2e725b6feddf3428922eceb51d"
	I1025 10:21:01.318251  634907 cri.go:89] found id: ""
	I1025 10:21:01.318298  634907 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:21:01.335824  634907 out.go:203] 
	W1025 10:21:01.337258  634907 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:21:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:21:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 10:21:01.337283  634907 out.go:285] * 
	* 
	W1025 10:21:01.341592  634907 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 10:21:01.343207  634907 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-667966 --alsologtostderr -v=1 failed: exit status 80
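The pause failure above reduces to one probe: after disabling the kubelet, minikube lists running containers via crictl and then asks runc directly, retrying with short jittered backoffs (352ms, 295ms, 379ms in this run) before mapping the last error to GUEST_PAUSE. A minimal standalone sketch of that probe loop follows, for reproducing the behavior on the node; the attempt count and fixed backoff are illustrative, not minikube's exact values.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// probeRunc mirrors the probe in the log: run `runc list -f json` a few
// times with a short backoff, returning the last error if no attempt succeeds.
func probeRunc(attempts int, backoff time.Duration) error {
	var last error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err == nil {
			fmt.Printf("running containers: %s\n", out)
			return nil
		}
		// On this crio node the runc state directory is absent, so every
		// attempt fails with "open /run/runc: no such file or directory".
		last = fmt.Errorf("runc list: %w: %s", err, out)
		time.Sleep(backoff)
	}
	return last
}

func main() {
	if err := probeRunc(4, 350*time.Millisecond); err != nil {
		// minikube surfaces this as the GUEST_PAUSE error above (exit status 80).
		fmt.Println("giving up:", err)
	}
}

Because /run/runc never appears between attempts, the retries only delay the failure; crictl, by contrast, keeps finding the six containers on every pass, which is why the found-id lists repeat unchanged across the three retry rounds.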
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-667966
helpers_test.go:243: (dbg) docker inspect newest-cni-667966:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cede76718eb297dbc08c6a92f84a8a33664f7c9525ae93a52761622e2228f38d",
	        "Created": "2025-10-25T10:20:12.207812957Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 630323,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:20:45.798949325Z",
	            "FinishedAt": "2025-10-25T10:20:44.797092589Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/cede76718eb297dbc08c6a92f84a8a33664f7c9525ae93a52761622e2228f38d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cede76718eb297dbc08c6a92f84a8a33664f7c9525ae93a52761622e2228f38d/hostname",
	        "HostsPath": "/var/lib/docker/containers/cede76718eb297dbc08c6a92f84a8a33664f7c9525ae93a52761622e2228f38d/hosts",
	        "LogPath": "/var/lib/docker/containers/cede76718eb297dbc08c6a92f84a8a33664f7c9525ae93a52761622e2228f38d/cede76718eb297dbc08c6a92f84a8a33664f7c9525ae93a52761622e2228f38d-json.log",
	        "Name": "/newest-cni-667966",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-667966:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-667966",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cede76718eb297dbc08c6a92f84a8a33664f7c9525ae93a52761622e2228f38d",
	                "LowerDir": "/var/lib/docker/overlay2/ced9eee064c8b62082c8ab15ce64e3d3efdb1a398a85d422f795367ad25ee78d-init/diff:/var/lib/docker/overlay2/9d1960ec6ab151df7efbd4d43f9ccd1aaf4e0bd9e7db4285118644a6d5a54279/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ced9eee064c8b62082c8ab15ce64e3d3efdb1a398a85d422f795367ad25ee78d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ced9eee064c8b62082c8ab15ce64e3d3efdb1a398a85d422f795367ad25ee78d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ced9eee064c8b62082c8ab15ce64e3d3efdb1a398a85d422f795367ad25ee78d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-667966",
	                "Source": "/var/lib/docker/volumes/newest-cni-667966/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-667966",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-667966",
	                "name.minikube.sigs.k8s.io": "newest-cni-667966",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1aeb2edecbe5406f33625a7190e1ceef6a9cb28571a0ad5934c745b67e9ec417",
	            "SandboxKey": "/var/run/docker/netns/1aeb2edecbe5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-667966": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:10:a2:cd:8a:d7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1607edd0e575c882979f9db63a22ad5ee1f0aabcbcf3a5dc021515221638bbcb",
	                    "EndpointID": "99df72bc9a43b07acd79ccfb6eb6d94b7e3e92a2c005004e8df61d3fe5d19e7e",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-667966",
	                        "cede76718eb2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
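Every port binding in the HostConfig section above is published to 127.0.0.1 with an empty HostPort, meaning Docker assigns an ephemeral host port at container start; the resolved values only appear under NetworkSettings.Ports. A single mapped port can be pulled out with the same Go template minikube itself runs later in this log, e.g. for SSH (port 22):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-667966

which prints 33113 for the container state captured above.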
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-667966 -n newest-cni-667966
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-667966 -n newest-cni-667966: exit status 2 (352.84828ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
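With --format={{.Host}} only the host state is printed, so the signal here is the exit code: minikube status exits non-zero when some component is not reported as Running, which is expected for a cluster the test has just paused (hence "may be ok" above). Dropping the format filter shows which component is responsible, e.g.:

	out/minikube-linux-amd64 status -p newest-cni-667966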
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-667966 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-667966 logs -n 25: (1.089258867s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-119085 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                             │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo cat /etc/containerd/config.toml                                                                                                                                                                                        │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo containerd config dump                                                                                                                                                                                                 │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                          │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo systemctl cat crio --no-pager                                                                                                                                                                                          │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-714798 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ ssh     │ -p flannel-119085 sudo crio config                                                                                                                                                                                                            │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ delete  │ -p flannel-119085                                                                                                                                                                                                                             │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ stop    │ -p old-k8s-version-714798 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p newest-cni-667966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-714798 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p old-k8s-version-714798 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-899665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ stop    │ -p no-preload-899665 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ addons  │ enable metrics-server -p newest-cni-667966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ stop    │ -p newest-cni-667966 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-767846 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-667966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p newest-cni-667966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ stop    │ -p default-k8s-diff-port-767846 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-899665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p no-preload-899665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ image   │ newest-cni-667966 image list --format=json                                                                                                                                                                                                    │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ pause   │ -p newest-cni-667966 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
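The bottom row of the audit table is the step under test: the pause against newest-cni-667966 has no recorded end time. The same invocation can be replayed by hand to reproduce the failure outside the test harness:

	out/minikube-linux-amd64 pause -p newest-cni-667966 --alsologtostderr -v=1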
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:20:48
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:20:48.892241  631515 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:20:48.892653  631515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:20:48.892669  631515 out.go:374] Setting ErrFile to fd 2...
	I1025 10:20:48.892676  631515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:20:48.893047  631515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 10:20:48.893975  631515 out.go:368] Setting JSON to false
	I1025 10:20:48.895918  631515 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7398,"bootTime":1761380251,"procs":405,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 10:20:48.896133  631515 start.go:141] virtualization: kvm guest
	I1025 10:20:48.899513  631515 out.go:179] * [no-preload-899665] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 10:20:48.901568  631515 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:20:48.901592  631515 notify.go:220] Checking for updates...
	I1025 10:20:48.905055  631515 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:20:48.907313  631515 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:20:48.909465  631515 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	I1025 10:20:48.910986  631515 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 10:20:48.912379  631515 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:20:48.914291  631515 config.go:182] Loaded profile config "no-preload-899665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:20:48.914976  631515 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:20:48.949962  631515 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 10:20:48.950082  631515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:20:49.042118  631515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-25 10:20:49.02747325 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:20:49.042250  631515 docker.go:318] overlay module found
	I1025 10:20:49.045154  631515 out.go:179] * Using the docker driver based on existing profile
	I1025 10:20:49.046717  631515 start.go:305] selected driver: docker
	I1025 10:20:49.046739  631515 start.go:925] validating driver "docker" against &{Name:no-preload-899665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-899665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:20:49.046879  631515 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:20:49.047724  631515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:20:49.128358  631515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-25 10:20:49.114483022 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:20:49.128697  631515 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:20:49.128747  631515 cni.go:84] Creating CNI manager for ""
	I1025 10:20:49.128791  631515 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:20:49.128835  631515 start.go:349] cluster config:
	{Name:no-preload-899665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-899665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:20:49.132098  631515 out.go:179] * Starting "no-preload-899665" primary control-plane node in "no-preload-899665" cluster
	I1025 10:20:49.133686  631515 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:20:49.135734  631515 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:20:49.137151  631515 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:20:49.137270  631515 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:20:49.137291  631515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/no-preload-899665/config.json ...
	I1025 10:20:49.137591  631515 cache.go:107] acquiring lock: {Name:mk40b6df814b6b5925975339c490eaa473a6de34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:20:49.137678  631515 cache.go:107] acquiring lock: {Name:mk598afb8705e91839dae1d4a2c6bc154c20ab42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:20:49.137631  631515 cache.go:107] acquiring lock: {Name:mkca7e8f698c00a2dded053258d11cb559d4a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:20:49.137602  631515 cache.go:107] acquiring lock: {Name:mk87b9b51f951a49c1140ff827e752119366fce0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:20:49.137763  631515 cache.go:115] /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1025 10:20:49.137767  631515 cache.go:107] acquiring lock: {Name:mk5e595f9203d1fc28a17a4a355a91fb1aaa2600 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:20:49.137783  631515 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 217.085µs
	I1025 10:20:49.137794  631515 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1025 10:20:49.137799  631515 cache.go:115] /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1025 10:20:49.137814  631515 cache.go:115] /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1025 10:20:49.137817  631515 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 156.703µs
	I1025 10:20:49.137826  631515 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 238.187µs
	I1025 10:20:49.137822  631515 cache.go:107] acquiring lock: {Name:mkc1b890852e9d05ce9fc035ad71487b8b862e47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:20:49.137836  631515 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1025 10:20:49.137834  631515 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1025 10:20:49.137821  631515 cache.go:115] /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1025 10:20:49.137848  631515 cache.go:115] /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1025 10:20:49.137853  631515 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 267.096µs
	I1025 10:20:49.137876  631515 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1025 10:20:49.137878  631515 cache.go:115] /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1025 10:20:49.137890  631515 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 75.695µs
	I1025 10:20:49.137859  631515 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 94.988µs
	I1025 10:20:49.137602  631515 cache.go:107] acquiring lock: {Name:mk66fb8d1501241cf6467abb2c486b29aeb41ec8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:20:49.137911  631515 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1025 10:20:49.137901  631515 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1025 10:20:49.137698  631515 cache.go:107] acquiring lock: {Name:mka703b719c5bb116e1b09495d013db0ad942e12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:20:49.137982  631515 cache.go:115] /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 10:20:49.137997  631515 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 413.277µs
	I1025 10:20:49.138006  631515 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 10:20:49.138026  631515 cache.go:115] /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1025 10:20:49.138040  631515 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 354.693µs
	I1025 10:20:49.138055  631515 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1025 10:20:49.138065  631515 cache.go:87] Successfully saved all images to host disk.
	I1025 10:20:49.166217  631515 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:20:49.166239  631515 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:20:49.166258  631515 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:20:49.166284  631515 start.go:360] acquireMachinesLock for no-preload-899665: {Name:mkc2679ab0df95807a2d573607220fcaad35ba8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:20:49.166370  631515 start.go:364] duration metric: took 69.129µs to acquireMachinesLock for "no-preload-899665"
	I1025 10:20:49.166391  631515 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:20:49.166397  631515 fix.go:54] fixHost starting: 
	I1025 10:20:49.166633  631515 cli_runner.go:164] Run: docker container inspect no-preload-899665 --format={{.State.Status}}
	I1025 10:20:49.189083  631515 fix.go:112] recreateIfNeeded on no-preload-899665: state=Stopped err=<nil>
	W1025 10:20:49.189137  631515 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:20:45.768193  630019 out.go:252] * Restarting existing docker container for "newest-cni-667966" ...
	I1025 10:20:45.768280  630019 cli_runner.go:164] Run: docker start newest-cni-667966
	I1025 10:20:46.070133  630019 cli_runner.go:164] Run: docker container inspect newest-cni-667966 --format={{.State.Status}}
	I1025 10:20:46.092660  630019 kic.go:430] container "newest-cni-667966" state is running.
	I1025 10:20:46.093308  630019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-667966
	I1025 10:20:46.115659  630019 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/newest-cni-667966/config.json ...
	I1025 10:20:46.115928  630019 machine.go:93] provisionDockerMachine start ...
	I1025 10:20:46.116006  630019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-667966
	I1025 10:20:46.137989  630019 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:46.138221  630019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1025 10:20:46.138233  630019 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:20:46.138879  630019 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60412->127.0.0.1:33113: read: connection reset by peer
	I1025 10:20:49.299940  630019 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-667966
	
	I1025 10:20:49.299993  630019 ubuntu.go:182] provisioning hostname "newest-cni-667966"
	I1025 10:20:49.300060  630019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-667966
	I1025 10:20:49.328450  630019 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:49.328866  630019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1025 10:20:49.328904  630019 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-667966 && echo "newest-cni-667966" | sudo tee /etc/hostname
	I1025 10:20:49.500631  630019 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-667966
	
	I1025 10:20:49.500721  630019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-667966
	I1025 10:20:49.523293  630019 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:49.523603  630019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1025 10:20:49.523631  630019 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-667966' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-667966/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-667966' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:20:49.684677  630019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:20:49.684712  630019 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-321838/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-321838/.minikube}
	I1025 10:20:49.684755  630019 ubuntu.go:190] setting up certificates
	I1025 10:20:49.684766  630019 provision.go:84] configureAuth start
	I1025 10:20:49.684823  630019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-667966
	I1025 10:20:49.706309  630019 provision.go:143] copyHostCerts
	I1025 10:20:49.706408  630019 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem, removing ...
	I1025 10:20:49.706430  630019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem
	I1025 10:20:49.706492  630019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem (1078 bytes)
	I1025 10:20:49.706664  630019 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem, removing ...
	I1025 10:20:49.706678  630019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem
	I1025 10:20:49.706711  630019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem (1123 bytes)
	I1025 10:20:49.706774  630019 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem, removing ...
	I1025 10:20:49.706781  630019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem
	I1025 10:20:49.706806  630019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem (1679 bytes)
	I1025 10:20:49.706859  630019 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem org=jenkins.newest-cni-667966 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-667966]
	I1025 10:20:50.149207  630019 provision.go:177] copyRemoteCerts
	I1025 10:20:50.149272  630019 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:20:50.149310  630019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-667966
	I1025 10:20:50.169121  630019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/newest-cni-667966/id_rsa Username:docker}
	I1025 10:20:50.275165  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:20:50.296422  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:20:50.317915  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:20:50.339177  630019 provision.go:87] duration metric: took 654.39529ms to configureAuth
	I1025 10:20:50.339213  630019 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:20:50.339482  630019 config.go:182] Loaded profile config "newest-cni-667966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:20:50.339614  630019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-667966
	I1025 10:20:50.360719  630019 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:50.361036  630019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1025 10:20:50.361057  630019 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:20:50.668171  630019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:20:50.668203  630019 machine.go:96] duration metric: took 4.552256919s to provisionDockerMachine
	I1025 10:20:50.668221  630019 start.go:293] postStartSetup for "newest-cni-667966" (driver="docker")
	I1025 10:20:50.668236  630019 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:20:50.668350  630019 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:20:50.668412  630019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-667966
	I1025 10:20:50.692505  630019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/newest-cni-667966/id_rsa Username:docker}
	I1025 10:20:50.808251  630019 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:20:50.814645  630019 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:20:50.814680  630019 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:20:50.814694  630019 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/addons for local assets ...
	I1025 10:20:50.814762  630019 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/files for local assets ...
	I1025 10:20:50.814858  630019 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem -> 3254552.pem in /etc/ssl/certs
	I1025 10:20:50.814990  630019 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:20:50.826107  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:20:50.852910  630019 start.go:296] duration metric: took 184.668446ms for postStartSetup
	I1025 10:20:50.853012  630019 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:20:50.853075  630019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-667966
	I1025 10:20:50.880740  630019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/newest-cni-667966/id_rsa Username:docker}
	I1025 10:20:50.991288  630019 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:20:50.998064  630019 fix.go:56] duration metric: took 5.251937458s for fixHost
	I1025 10:20:50.998095  630019 start.go:83] releasing machines lock for "newest-cni-667966", held for 5.251994374s
	I1025 10:20:50.998168  630019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-667966
	I1025 10:20:51.022426  630019 ssh_runner.go:195] Run: cat /version.json
	I1025 10:20:51.022496  630019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-667966
	I1025 10:20:51.022529  630019 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:20:51.022612  630019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-667966
	I1025 10:20:51.047070  630019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/newest-cni-667966/id_rsa Username:docker}
	I1025 10:20:51.047960  630019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/newest-cni-667966/id_rsa Username:docker}
	I1025 10:20:51.231545  630019 ssh_runner.go:195] Run: systemctl --version
	I1025 10:20:51.240537  630019 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:20:51.292369  630019 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:20:51.299044  630019 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:20:51.299124  630019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:20:51.310848  630019 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:20:51.310880  630019 start.go:495] detecting cgroup driver to use...
	I1025 10:20:51.310918  630019 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 10:20:51.310977  630019 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:20:51.332197  630019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:20:51.349959  630019 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:20:51.350023  630019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:20:51.371166  630019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:20:51.389953  630019 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:20:51.505076  630019 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:20:51.628195  630019 docker.go:234] disabling docker service ...
	I1025 10:20:51.628285  630019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:20:51.649484  630019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:20:51.667012  630019 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:20:51.783861  630019 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:20:51.894561  630019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:20:51.912476  630019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:20:51.932229  630019 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:20:51.932291  630019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:51.945796  630019 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 10:20:51.945885  630019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:51.957107  630019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:51.969962  630019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:51.982748  630019 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:20:51.994471  630019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:52.008638  630019 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:52.021198  630019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:52.034791  630019 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:20:52.045892  630019 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:20:52.057105  630019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:20:52.176914  630019 ssh_runner.go:195] Run: sudo systemctl restart crio
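Taken together, the sed edits above amount to the following fragment of /etc/crio/crio.conf.d/02-crio.conf (a reconstruction from the commands in this log, not a captured copy of the file), which the daemon-reload and crio restart then apply:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]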
	I1025 10:20:52.803601  630019 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:20:52.803682  630019 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:20:52.809919  630019 start.go:563] Will wait 60s for crictl version
	I1025 10:20:52.809990  630019 ssh_runner.go:195] Run: which crictl
	I1025 10:20:52.815976  630019 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:20:52.849958  630019 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:20:52.850050  630019 ssh_runner.go:195] Run: crio --version
	I1025 10:20:52.891546  630019 ssh_runner.go:195] Run: crio --version
	I1025 10:20:52.938840  630019 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:20:52.940039  630019 cli_runner.go:164] Run: docker network inspect newest-cni-667966 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:20:52.964531  630019 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1025 10:20:52.970546  630019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
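The guarded rewrite above leaves exactly one gateway alias in the node's /etc/hosts, so workloads can reach the host machine by name; for the network in this log the resulting entry is:

	192.168.94.1	host.minikube.internal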
	I1025 10:20:52.988553  630019 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1025 10:20:48.769498  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	W1025 10:20:51.266740  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	W1025 10:20:53.268892  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	I1025 10:20:49.193936  631515 out.go:252] * Restarting existing docker container for "no-preload-899665" ...
	I1025 10:20:49.194048  631515 cli_runner.go:164] Run: docker start no-preload-899665
	I1025 10:20:49.487349  631515 cli_runner.go:164] Run: docker container inspect no-preload-899665 --format={{.State.Status}}
	I1025 10:20:49.508735  631515 kic.go:430] container "no-preload-899665" state is running.
	I1025 10:20:49.509182  631515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-899665
	I1025 10:20:49.531885  631515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/no-preload-899665/config.json ...
	I1025 10:20:49.532161  631515 machine.go:93] provisionDockerMachine start ...
	I1025 10:20:49.532245  631515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-899665
	I1025 10:20:49.555814  631515 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:49.556042  631515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1025 10:20:49.556054  631515 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:20:49.556754  631515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:32890->127.0.0.1:33118: read: connection reset by peer
	I1025 10:20:52.718847  631515 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-899665
	
	I1025 10:20:52.718897  631515 ubuntu.go:182] provisioning hostname "no-preload-899665"
	I1025 10:20:52.718985  631515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-899665
	I1025 10:20:52.744966  631515 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:52.745367  631515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1025 10:20:52.745389  631515 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-899665 && echo "no-preload-899665" | sudo tee /etc/hostname
	I1025 10:20:52.927925  631515 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-899665
	
	I1025 10:20:52.928103  631515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-899665
	I1025 10:20:52.956215  631515 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:52.956609  631515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1025 10:20:52.956647  631515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-899665' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-899665/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-899665' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:20:53.123099  631515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:20:53.123133  631515 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-321838/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-321838/.minikube}
	I1025 10:20:53.123160  631515 ubuntu.go:190] setting up certificates
	I1025 10:20:53.123173  631515 provision.go:84] configureAuth start
	I1025 10:20:53.123235  631515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-899665
	I1025 10:20:53.147069  631515 provision.go:143] copyHostCerts
	I1025 10:20:53.147134  631515 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem, removing ...
	I1025 10:20:53.147144  631515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem
	I1025 10:20:53.147207  631515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem (1078 bytes)
	I1025 10:20:53.147332  631515 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem, removing ...
	I1025 10:20:53.147348  631515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem
	I1025 10:20:53.147403  631515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem (1123 bytes)
	I1025 10:20:53.147488  631515 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem, removing ...
	I1025 10:20:53.147495  631515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem
	I1025 10:20:53.147532  631515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem (1679 bytes)
	I1025 10:20:53.147610  631515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem org=jenkins.no-preload-899665 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-899665]
	I1025 10:20:53.237709  631515 provision.go:177] copyRemoteCerts
	I1025 10:20:53.237773  631515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:20:53.237825  631515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-899665
	I1025 10:20:53.264567  631515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/no-preload-899665/id_rsa Username:docker}
	I1025 10:20:53.384587  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:20:53.405891  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:20:53.430026  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:20:53.454888  631515 provision.go:87] duration metric: took 331.700401ms to configureAuth
	I1025 10:20:53.454919  631515 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:20:53.455132  631515 config.go:182] Loaded profile config "no-preload-899665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:20:53.455253  631515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-899665
	I1025 10:20:53.478848  631515 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:53.479160  631515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1025 10:20:53.479186  631515 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:20:52.989846  630019 kubeadm.go:883] updating cluster {Name:newest-cni-667966 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-667966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:20:52.990059  630019 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:20:52.990145  630019 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:20:53.038227  630019 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:20:53.038257  630019 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:20:53.038339  630019 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:20:53.080258  630019 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:20:53.080360  630019 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:20:53.080374  630019 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1025 10:20:53.080517  630019 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-667966 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-667966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
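The unit text above clears ExecStart and re-declares it via a drop-in (the empty ExecStart= line is what resets the stock definition). A quick way to confirm the override is the one systemd will actually run (sketch):

    sudo systemctl daemon-reload
    systemctl cat kubelet | grep -E 'ExecStart|node-ip'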
	I1025 10:20:53.080605  630019 ssh_runner.go:195] Run: crio config
	I1025 10:20:53.153729  630019 cni.go:84] Creating CNI manager for ""
	I1025 10:20:53.153750  630019 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:20:53.153769  630019 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1025 10:20:53.153791  630019 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-667966 NodeName:newest-cni-667966 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:20:53.153953  630019 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-667966"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:20:53.154033  630019 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:20:53.165634  630019 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:20:53.165721  630019 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:20:53.178769  630019 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 10:20:53.198208  630019 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:20:53.217518  630019 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
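The generated kubeadm config lands as /var/tmp/minikube/kubeadm.yaml.new before being diffed against the live copy. A config like this can also be sanity-checked offline with kubeadm's built-in validator (a sketch; `kubeadm config validate` exists in recent kubeadm releases):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new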
	I1025 10:20:53.237738  630019 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:20:53.243932  630019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:20:53.259814  630019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:20:53.374619  630019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:20:53.403849  630019 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/newest-cni-667966 for IP: 192.168.94.2
	I1025 10:20:53.403878  630019 certs.go:195] generating shared ca certs ...
	I1025 10:20:53.403903  630019 certs.go:227] acquiring lock for ca certs: {Name:mke559c04eea9c265cb2a7beab0da125bee52db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:53.404087  630019 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key
	I1025 10:20:53.404147  630019 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key
	I1025 10:20:53.404160  630019 certs.go:257] generating profile certs ...
	I1025 10:20:53.404273  630019 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/newest-cni-667966/client.key
	I1025 10:20:53.404383  630019 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/newest-cni-667966/apiserver.key.e7f90482
	I1025 10:20:53.404439  630019 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/newest-cni-667966/proxy-client.key
	I1025 10:20:53.404605  630019 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem (1338 bytes)
	W1025 10:20:53.404655  630019 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455_empty.pem, impossibly tiny 0 bytes
	I1025 10:20:53.404670  630019 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:20:53.404704  630019 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:20:53.404737  630019 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:20:53.404769  630019 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem (1679 bytes)
	I1025 10:20:53.404826  630019 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:20:53.405722  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:20:53.430160  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 10:20:53.454528  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:20:53.481952  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:20:53.515206  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/newest-cni-667966/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 10:20:53.541101  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/newest-cni-667966/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:20:53.563105  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/newest-cni-667966/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:20:53.586068  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/newest-cni-667966/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:20:53.606595  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /usr/share/ca-certificates/3254552.pem (1708 bytes)
	I1025 10:20:53.629140  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:20:53.650970  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem --> /usr/share/ca-certificates/325455.pem (1338 bytes)
	I1025 10:20:53.671738  630019 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:20:53.687688  630019 ssh_runner.go:195] Run: openssl version
	I1025 10:20:53.695282  630019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:20:53.706373  630019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:20:53.711131  630019 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:32 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:20:53.711204  630019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:20:53.751009  630019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:20:53.761693  630019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/325455.pem && ln -fs /usr/share/ca-certificates/325455.pem /etc/ssl/certs/325455.pem"
	I1025 10:20:53.773734  630019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/325455.pem
	I1025 10:20:53.778475  630019 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:38 /usr/share/ca-certificates/325455.pem
	I1025 10:20:53.778547  630019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/325455.pem
	I1025 10:20:53.819940  630019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/325455.pem /etc/ssl/certs/51391683.0"
	I1025 10:20:53.831740  630019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3254552.pem && ln -fs /usr/share/ca-certificates/3254552.pem /etc/ssl/certs/3254552.pem"
	I1025 10:20:53.869591  630019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3254552.pem
	I1025 10:20:53.874718  630019 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:38 /usr/share/ca-certificates/3254552.pem
	I1025 10:20:53.874790  630019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3254552.pem
	I1025 10:20:53.911841  630019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3254552.pem /etc/ssl/certs/3ec20f2e.0"
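Each test/hash/link triplet above installs a CA into OpenSSL's hashed trust directory, where certificates are looked up by subject-hash filename. The generic pattern (sketch):

    pem=/usr/share/ca-certificates/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$pem")   # prints e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"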
	I1025 10:20:53.921665  630019 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:20:53.926569  630019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:20:53.966659  630019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:20:54.004766  630019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:20:54.041521  630019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:20:54.097804  630019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:20:54.153429  630019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
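Each -checkend 86400 call exits non-zero if the certificate expires within 24 hours, which is what gates reusing the existing certs on restart. The same checks in looped form (sketch; cert list taken from the run):

    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
             etcd/healthcheck-client etcd/peer front-proxy-client; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
        || echo "${c}.crt expires within 24h"
    done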
	I1025 10:20:54.214155  630019 kubeadm.go:400] StartCluster: {Name:newest-cni-667966 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-667966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:20:54.214276  630019 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:20:54.214350  630019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:20:54.253448  630019 cri.go:89] found id: "dc5e1fe15e732a2803c1f34dbd191e88cbb7d2a206a70f2c5cceb65b9334f033"
	I1025 10:20:54.253524  630019 cri.go:89] found id: "9f8c1df6dfdf4d3f7a952f8fecf040c1639fbc9112d5b20da3d4311228fe970b"
	I1025 10:20:54.253531  630019 cri.go:89] found id: "043d021586bedd90d0ccb57b16a6588989a4f1d67466bdf08a11a2fad83d6525"
	I1025 10:20:54.253536  630019 cri.go:89] found id: "d1f99cc829179c6c6f2484ba5bc57e6507269d2e725b6feddf3428922eceb51d"
	I1025 10:20:54.253541  630019 cri.go:89] found id: ""
	I1025 10:20:54.253594  630019 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:20:54.270489  630019 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:20:54Z" level=error msg="open /run/runc: no such file or directory"
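The runc error above is benign immediately after a CRI-O restart: /run/runc only exists once a container has been created, so the paused-container check simply falls through. A manual reproduction on the node (sketch):

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    sudo runc list -f json || echo "no /run/runc yet; treating as: nothing paused"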
	I1025 10:20:54.270581  630019 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:20:54.281117  630019 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:20:54.281142  630019 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:20:54.281200  630019 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:20:54.290456  630019 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:20:54.291147  630019 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-667966" does not appear in /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:20:54.291541  630019 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-321838/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-667966" cluster setting kubeconfig missing "newest-cni-667966" context setting]
	I1025 10:20:54.292084  630019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:54.294063  630019 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:20:54.305052  630019 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1025 10:20:54.305095  630019 kubeadm.go:601] duration metric: took 23.947598ms to restartPrimaryControlPlane
	I1025 10:20:54.305105  630019 kubeadm.go:402] duration metric: took 90.963924ms to StartCluster
	I1025 10:20:54.305129  630019 settings.go:142] acquiring lock: {Name:mkccf1ae03faaf532633e370e89b56d0fc3d2b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:54.305187  630019 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:20:54.306138  630019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:54.306645  630019 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:20:54.306795  630019 config.go:182] Loaded profile config "newest-cni-667966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:20:54.306849  630019 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:20:54.306961  630019 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-667966"
	I1025 10:20:54.306980  630019 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-667966"
	I1025 10:20:54.306976  630019 addons.go:69] Setting dashboard=true in profile "newest-cni-667966"
	I1025 10:20:54.306990  630019 addons.go:69] Setting default-storageclass=true in profile "newest-cni-667966"
	I1025 10:20:54.307009  630019 addons.go:238] Setting addon dashboard=true in "newest-cni-667966"
	I1025 10:20:54.307016  630019 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-667966"
	W1025 10:20:54.307029  630019 addons.go:247] addon dashboard should already be in state true
	I1025 10:20:54.307065  630019 host.go:66] Checking if "newest-cni-667966" exists ...
	W1025 10:20:54.306988  630019 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:20:54.307107  630019 host.go:66] Checking if "newest-cni-667966" exists ...
	I1025 10:20:54.307419  630019 cli_runner.go:164] Run: docker container inspect newest-cni-667966 --format={{.State.Status}}
	I1025 10:20:54.307555  630019 cli_runner.go:164] Run: docker container inspect newest-cni-667966 --format={{.State.Status}}
	I1025 10:20:54.307641  630019 cli_runner.go:164] Run: docker container inspect newest-cni-667966 --format={{.State.Status}}
	I1025 10:20:54.312474  630019 out.go:179] * Verifying Kubernetes components...
	I1025 10:20:54.314110  630019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:20:54.336995  630019 addons.go:238] Setting addon default-storageclass=true in "newest-cni-667966"
	W1025 10:20:54.337078  630019 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:20:54.337112  630019 host.go:66] Checking if "newest-cni-667966" exists ...
	I1025 10:20:54.339271  630019 cli_runner.go:164] Run: docker container inspect newest-cni-667966 --format={{.State.Status}}
	I1025 10:20:54.343447  630019 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:20:54.343562  630019 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:20:54.345183  630019 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 10:20:54.345234  630019 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:20:54.345252  630019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:20:54.345338  630019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-667966
	I1025 10:20:54.346403  630019 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:20:54.346424  630019 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:20:54.346483  630019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-667966
	I1025 10:20:54.370474  630019 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:20:54.370504  630019 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:20:54.370572  630019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-667966
	I1025 10:20:54.377427  630019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/newest-cni-667966/id_rsa Username:docker}
	I1025 10:20:54.385507  630019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/newest-cni-667966/id_rsa Username:docker}
	I1025 10:20:54.403428  630019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/newest-cni-667966/id_rsa Username:docker}
	I1025 10:20:54.488137  630019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:20:54.511859  630019 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:20:54.511938  630019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:20:54.521737  630019 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:20:54.521770  630019 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:20:54.552927  630019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:20:54.554409  630019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:20:54.556710  630019 api_server.go:72] duration metric: took 250.019827ms to wait for apiserver process to appear ...
	I1025 10:20:54.556741  630019 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:20:54.556763  630019 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 10:20:54.584891  630019 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:20:54.584950  630019 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:20:54.607905  630019 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:20:54.607933  630019 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:20:54.626865  630019 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:20:54.626892  630019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:20:54.648265  630019 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:20:54.648291  630019 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:20:54.667061  630019 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:20:54.667089  630019 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:20:54.682638  630019 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:20:54.682672  630019 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:20:54.697727  630019 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:20:54.697752  630019 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:20:54.712641  630019 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:20:54.712667  630019 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:20:54.728065  630019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:20:54.127479  631515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:20:54.127511  631515 machine.go:96] duration metric: took 4.595330684s to provisionDockerMachine
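The SSH output above echoes the CRIO_MINIKUBE_OPTIONS line as it is written to /etc/sysconfig/crio.minikube; it can be checked directly on the node (a sketch; whether the crio unit lists the file depends on the image's systemd drop-ins):

    cat /etc/sysconfig/crio.minikube
    systemctl show crio -p EnvironmentFiles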
	I1025 10:20:54.127525  631515 start.go:293] postStartSetup for "no-preload-899665" (driver="docker")
	I1025 10:20:54.127538  631515 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:20:54.127611  631515 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:20:54.127657  631515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-899665
	I1025 10:20:54.153690  631515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/no-preload-899665/id_rsa Username:docker}
	I1025 10:20:54.268471  631515 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:20:54.272723  631515 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:20:54.272758  631515 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:20:54.272773  631515 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/addons for local assets ...
	I1025 10:20:54.272833  631515 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/files for local assets ...
	I1025 10:20:54.272931  631515 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem -> 3254552.pem in /etc/ssl/certs
	I1025 10:20:54.273058  631515 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:20:54.281767  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:20:54.310788  631515 start.go:296] duration metric: took 183.246947ms for postStartSetup
	I1025 10:20:54.311124  631515 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:20:54.311213  631515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-899665
	I1025 10:20:54.341511  631515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/no-preload-899665/id_rsa Username:docker}
	I1025 10:20:54.461574  631515 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:20:54.467577  631515 fix.go:56] duration metric: took 5.301173489s for fixHost
	I1025 10:20:54.467605  631515 start.go:83] releasing machines lock for "no-preload-899665", held for 5.301219101s
	I1025 10:20:54.467683  631515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-899665
	I1025 10:20:54.490006  631515 ssh_runner.go:195] Run: cat /version.json
	I1025 10:20:54.490072  631515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-899665
	I1025 10:20:54.490141  631515 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:20:54.490232  631515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-899665
	I1025 10:20:54.520545  631515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/no-preload-899665/id_rsa Username:docker}
	I1025 10:20:54.521981  631515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/no-preload-899665/id_rsa Username:docker}
	I1025 10:20:54.733828  631515 ssh_runner.go:195] Run: systemctl --version
	I1025 10:20:54.742641  631515 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:20:54.792903  631515 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:20:54.798610  631515 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:20:54.798691  631515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:20:54.808376  631515 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
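The find invocation above is logged with its shell quoting stripped; restored for readability (a sketch with the same intent, assuming CNI config names without spaces):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;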
	I1025 10:20:54.808416  631515 start.go:495] detecting cgroup driver to use...
	I1025 10:20:54.808458  631515 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 10:20:54.808516  631515 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:20:54.825112  631515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:20:54.840389  631515 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:20:54.840461  631515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:20:54.857870  631515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:20:54.873674  631515 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:20:54.974894  631515 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:20:55.067020  631515 docker.go:234] disabling docker service ...
	I1025 10:20:55.067099  631515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:20:55.083790  631515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:20:55.098241  631515 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:20:55.205969  631515 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:20:55.316573  631515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:20:55.332941  631515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:20:55.354826  631515 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:20:55.354909  631515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:55.366776  631515 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 10:20:55.366853  631515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:55.380553  631515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:55.392919  631515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:55.404830  631515 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:20:55.416172  631515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:55.429033  631515 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:55.441980  631515 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:55.453709  631515 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:20:55.463294  631515 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:20:55.475014  631515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:20:55.590156  631515 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:20:55.724697  631515 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:20:55.724771  631515 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:20:55.729391  631515 start.go:563] Will wait 60s for crictl version
	I1025 10:20:55.729451  631515 ssh_runner.go:195] Run: which crictl
	I1025 10:20:55.733598  631515 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:20:55.765927  631515 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:20:55.766012  631515 ssh_runner.go:195] Run: crio --version
	I1025 10:20:55.799220  631515 ssh_runner.go:195] Run: crio --version
	I1025 10:20:55.837022  631515 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:20:56.243420  630019 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 10:20:56.243529  630019 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 10:20:56.243564  630019 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 10:20:56.273312  630019 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 10:20:56.273361  630019 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 10:20:56.392447  630019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.839475838s)
	I1025 10:20:56.557287  630019 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 10:20:56.562866  630019 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:20:56.562900  630019 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
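Both response bodies above signal "apiserver is up" at different stages: 403 means anonymous access is blocked but the listener is serving, while 500 with [-]poststarthook lines means bootstrap hooks (RBAC roles, priority classes) are still finishing. Reproducing the probe by hand (sketch):

    # -k because the apiserver certificate is not in the local trust store
    curl -k https://192.168.94.2:8443/healthz
    curl -k 'https://192.168.94.2:8443/healthz?verbose'   # itemized check list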
	I1025 10:20:56.946525  630019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.392017509s)
	I1025 10:20:56.946894  630019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.218766709s)
	I1025 10:20:56.951372  630019 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-667966 addons enable metrics-server
	
	I1025 10:20:56.953992  630019 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1025 10:20:55.838326  631515 cli_runner.go:164] Run: docker network inspect no-preload-899665 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:20:55.866402  631515 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 10:20:55.871215  631515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:20:55.883474  631515 kubeadm.go:883] updating cluster {Name:no-preload-899665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-899665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:20:55.883647  631515 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:20:55.883698  631515 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:20:55.918530  631515 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:20:55.918555  631515 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:20:55.918564  631515 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1025 10:20:55.918692  631515 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-899665 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-899665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
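	Note: the kubelet flags above land in a systemd drop-in rather than in the unit file itself; the empty ExecStart= line clears the packaged command before minikube's own is set. To inspect the rendered unit on the node (assuming the profile is still running):
		minikube -p no-preload-899665 ssh -- sudo systemctl cat kubelet
		minikube -p no-preload-899665 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf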
	I1025 10:20:55.918774  631515 ssh_runner.go:195] Run: crio config
	I1025 10:20:55.987775  631515 cni.go:84] Creating CNI manager for ""
	I1025 10:20:55.987800  631515 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:20:55.987835  631515 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:20:55.987866  631515 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-899665 NodeName:no-preload-899665 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:20:55.988045  631515 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-899665"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:20:55.988168  631515 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:20:56.002469  631515 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:20:56.002547  631515 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:20:56.012923  631515 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 10:20:56.028715  631515 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:20:56.044472  631515 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
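	Note: the generated config above is staged as kubeadm.yaml.new before being swapped in. A quick sanity check of the four documents it should contain, and (if this kubeadm build supports the subcommand) a full validation, can be run on the node:
		sudo grep -E '^kind:' /var/tmp/minikube/kubeadm.yaml.new
		# expect: InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration
		# `kubeadm config validate` exists in recent releases; skip if unsupported here
		sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new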
	I1025 10:20:56.060245  631515 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:20:56.064876  631515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:20:56.077354  631515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:20:56.208720  631515 ssh_runner.go:195] Run: sudo systemctl start kubelet
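	Note: after the daemon-reload and start above, kubelet health can be spot-checked independently of the test harness (assuming SSH access to the profile):
		minikube -p no-preload-899665 ssh -- sudo systemctl is-active kubelet
		minikube -p no-preload-899665 ssh -- sudo journalctl -u kubelet --no-pager -n 20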
	I1025 10:20:56.238889  631515 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/no-preload-899665 for IP: 192.168.76.2
	I1025 10:20:56.238916  631515 certs.go:195] generating shared ca certs ...
	I1025 10:20:56.238936  631515 certs.go:227] acquiring lock for ca certs: {Name:mke559c04eea9c265cb2a7beab0da125bee52db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:56.239091  631515 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key
	I1025 10:20:56.239135  631515 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key
	I1025 10:20:56.239144  631515 certs.go:257] generating profile certs ...
	I1025 10:20:56.239269  631515 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/no-preload-899665/client.key
	I1025 10:20:56.239354  631515 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/no-preload-899665/apiserver.key.3b890db5
	I1025 10:20:56.239404  631515 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/no-preload-899665/proxy-client.key
	I1025 10:20:56.239554  631515 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem (1338 bytes)
	W1025 10:20:56.239589  631515 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455_empty.pem, impossibly tiny 0 bytes
	I1025 10:20:56.239600  631515 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:20:56.239628  631515 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:20:56.239654  631515 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:20:56.239680  631515 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem (1679 bytes)
	I1025 10:20:56.239738  631515 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:20:56.240543  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:20:56.303265  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 10:20:56.337533  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:20:56.370571  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:20:56.414944  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/no-preload-899665/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 10:20:56.442232  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/no-preload-899665/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 10:20:56.465933  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/no-preload-899665/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:20:56.492999  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/no-preload-899665/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 10:20:56.520005  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /usr/share/ca-certificates/3254552.pem (1708 bytes)
	I1025 10:20:56.548457  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:20:56.576657  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem --> /usr/share/ca-certificates/325455.pem (1338 bytes)
	I1025 10:20:56.603351  631515 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
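	Note: once the certificates are copied, a cert/key pair can be cross-checked by comparing public keys (generic OpenSSL, run on the node; paths taken from the scp lines above):
		sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -pubkey | sha256sum
		sudo openssl pkey -in /var/lib/minikube/certs/apiserver.key -pubout | sha256sum
		# matching digests mean the pair belongs together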
	I1025 10:20:56.620679  631515 ssh_runner.go:195] Run: openssl version
	I1025 10:20:56.628144  631515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:20:56.640781  631515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:20:56.645951  631515 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:32 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:20:56.646023  631515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:20:56.696572  631515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:20:56.708539  631515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/325455.pem && ln -fs /usr/share/ca-certificates/325455.pem /etc/ssl/certs/325455.pem"
	I1025 10:20:56.721638  631515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/325455.pem
	I1025 10:20:56.727654  631515 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:38 /usr/share/ca-certificates/325455.pem
	I1025 10:20:56.727720  631515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/325455.pem
	I1025 10:20:56.787845  631515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/325455.pem /etc/ssl/certs/51391683.0"
	I1025 10:20:56.802257  631515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3254552.pem && ln -fs /usr/share/ca-certificates/3254552.pem /etc/ssl/certs/3254552.pem"
	I1025 10:20:56.815936  631515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3254552.pem
	I1025 10:20:56.822761  631515 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:38 /usr/share/ca-certificates/3254552.pem
	I1025 10:20:56.822840  631515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3254552.pem
	I1025 10:20:56.878515  631515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3254552.pem /etc/ssl/certs/3ec20f2e.0"
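	Note: the <hash>.0 symlinks above implement OpenSSL's subject-hash lookup scheme: `openssl x509 -hash` prints the subject hash (b5213941, 51391683, 3ec20f2e here) and the link points <hash>.0 at the certificate. A minimal reproduction of one link:
		H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${H}.0"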
	I1025 10:20:56.892169  631515 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:20:56.897844  631515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:20:56.960356  631515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:20:57.021535  631515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:20:57.073286  631515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:20:57.121738  631515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:20:57.173939  631515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
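	Note: each `-checkend 86400` call above asks OpenSSL whether the certificate survives the next 24 hours (86400 s); the exit status carries the answer, e.g.:
		sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
		  && echo "valid for >24h" || echo "expires within 24h"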
	I1025 10:20:57.216691  631515 kubeadm.go:400] StartCluster: {Name:no-preload-899665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-899665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:20:57.216805  631515 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:20:57.216883  631515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:20:57.255924  631515 cri.go:89] found id: "5120b28e61a325e39f449795f46e9d4332fe4fe8d721f0cb753fff3aeddf5964"
	I1025 10:20:57.255965  631515 cri.go:89] found id: "352d3fd34e0c2d541fcf1e1a74e6466f8d1c2eeb5794c69f26b05784aa993d7f"
	I1025 10:20:57.255971  631515 cri.go:89] found id: "b199511be2bb272a9b6fcefc2c7f2d0cc2c364bcb33d5762b0f79b58442e445a"
	I1025 10:20:57.255976  631515 cri.go:89] found id: "f94925c7a05442fb6214b27d55f74ec54efa54bb994038837f4ee6aec190c793"
	I1025 10:20:57.255979  631515 cri.go:89] found id: ""
	I1025 10:20:57.256031  631515 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:20:57.274862  631515 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:20:57Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:20:57.274937  631515 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:20:57.287218  631515 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:20:57.287249  631515 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:20:57.287310  631515 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:20:57.299539  631515 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:20:57.300269  631515 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-899665" does not appear in /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:20:57.300948  631515 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-321838/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-899665" cluster setting kubeconfig missing "no-preload-899665" context setting]
	I1025 10:20:57.301940  631515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:57.303983  631515 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:20:57.317622  631515 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1025 10:20:57.317674  631515 kubeadm.go:601] duration metric: took 30.418229ms to restartPrimaryControlPlane
	I1025 10:20:57.317690  631515 kubeadm.go:402] duration metric: took 101.010179ms to StartCluster
	I1025 10:20:57.317714  631515 settings.go:142] acquiring lock: {Name:mkccf1ae03faaf532633e370e89b56d0fc3d2b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:57.317790  631515 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:20:57.319898  631515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:57.320234  631515 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:20:57.320527  631515 config.go:182] Loaded profile config "no-preload-899665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:20:57.320635  631515 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:20:57.320943  631515 addons.go:69] Setting dashboard=true in profile "no-preload-899665"
	I1025 10:20:57.320972  631515 addons.go:238] Setting addon dashboard=true in "no-preload-899665"
	W1025 10:20:57.320981  631515 addons.go:247] addon dashboard should already be in state true
	I1025 10:20:57.321132  631515 host.go:66] Checking if "no-preload-899665" exists ...
	I1025 10:20:57.321068  631515 addons.go:69] Setting storage-provisioner=true in profile "no-preload-899665"
	I1025 10:20:57.321179  631515 addons.go:238] Setting addon storage-provisioner=true in "no-preload-899665"
	W1025 10:20:57.321193  631515 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:20:57.321235  631515 host.go:66] Checking if "no-preload-899665" exists ...
	I1025 10:20:57.321062  631515 addons.go:69] Setting default-storageclass=true in profile "no-preload-899665"
	I1025 10:20:57.321352  631515 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-899665"
	I1025 10:20:57.321680  631515 cli_runner.go:164] Run: docker container inspect no-preload-899665 --format={{.State.Status}}
	I1025 10:20:57.321789  631515 cli_runner.go:164] Run: docker container inspect no-preload-899665 --format={{.State.Status}}
	I1025 10:20:57.321805  631515 cli_runner.go:164] Run: docker container inspect no-preload-899665 --format={{.State.Status}}
	I1025 10:20:57.326294  631515 out.go:179] * Verifying Kubernetes components...
	I1025 10:20:57.347789  631515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:20:57.358399  631515 addons.go:238] Setting addon default-storageclass=true in "no-preload-899665"
	W1025 10:20:57.358485  631515 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:20:57.358533  631515 host.go:66] Checking if "no-preload-899665" exists ...
	I1025 10:20:57.358719  631515 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:20:57.359240  631515 cli_runner.go:164] Run: docker container inspect no-preload-899665 --format={{.State.Status}}
	I1025 10:20:57.360118  631515 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:20:57.360276  631515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:20:57.360243  631515 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:20:57.360411  631515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-899665
	I1025 10:20:57.362921  631515 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 10:20:56.954911  630019 addons.go:514] duration metric: took 2.648067112s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1025 10:20:57.057882  630019 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 10:20:57.064121  630019 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:20:57.064153  630019 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
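	Note: the 500 bodies above come straight from the apiserver's /healthz endpoint, which is normally readable without credentials (the system:public-info-viewer role covers /healthz), so the probe can be reproduced by hand from the node:
		minikube -p newest-cni-667966 ssh -- curl -sk "https://192.168.94.2:8443/healthz?verbose"
		# ?verbose lists the individual checks even once the endpoint returns 200/ok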
	I1025 10:20:57.557503  630019 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 10:20:57.564175  630019 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:20:57.564208  630019 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:20:58.057498  630019 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 10:20:58.062542  630019 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1025 10:20:58.064041  630019 api_server.go:141] control plane version: v1.34.1
	I1025 10:20:58.064072  630019 api_server.go:131] duration metric: took 3.507323093s to wait for apiserver health ...
	I1025 10:20:58.064084  630019 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:20:58.068404  630019 system_pods.go:59] 8 kube-system pods found
	I1025 10:20:58.068447  630019 system_pods.go:61] "coredns-66bc5c9577-r94h4" [2115a28b-31dc-4c2c-92cc-673a27e36bbf] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 10:20:58.068459  630019 system_pods.go:61] "etcd-newest-cni-667966" [11d44ba6-f334-4879-aa97-64a7a7607270] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:20:58.068467  630019 system_pods.go:61] "kindnet-srprb" [02e64d9a-cbe3-4e98-81a0-7c609fa2b1bb] Running
	I1025 10:20:58.068476  630019 system_pods.go:61] "kube-apiserver-newest-cni-667966" [5cec7e59-41bf-413f-a61f-f10bb6663011] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:20:58.068485  630019 system_pods.go:61] "kube-controller-manager-newest-cni-667966" [ff16c3cb-b8d1-4823-a897-47d3d0e58335] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:20:58.068495  630019 system_pods.go:61] "kube-proxy-vngwv" [273b5cf5-0600-4009-bab3-06b3a900b02d] Running
	I1025 10:20:58.068500  630019 system_pods.go:61] "kube-scheduler-newest-cni-667966" [9aac2144-6942-4b66-9a48-0defb4aba756] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:20:58.068505  630019 system_pods.go:61] "storage-provisioner" [bd681a48-b157-41ff-b49f-5189827996b1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 10:20:58.068513  630019 system_pods.go:74] duration metric: took 4.421663ms to wait for pod list to return data ...
	I1025 10:20:58.068527  630019 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:20:58.071278  630019 default_sa.go:45] found service account: "default"
	I1025 10:20:58.071305  630019 default_sa.go:55] duration metric: took 2.770038ms for default service account to be created ...
	I1025 10:20:58.071351  630019 kubeadm.go:586] duration metric: took 3.764635819s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 10:20:58.071377  630019 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:20:58.074474  630019 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 10:20:58.074513  630019 node_conditions.go:123] node cpu capacity is 8
	I1025 10:20:58.074532  630019 node_conditions.go:105] duration metric: took 3.14888ms to run NodePressure ...
	I1025 10:20:58.074548  630019 start.go:241] waiting for startup goroutines ...
	I1025 10:20:58.074557  630019 start.go:246] waiting for cluster config update ...
	I1025 10:20:58.074569  630019 start.go:255] writing updated cluster config ...
	I1025 10:20:58.074982  630019 ssh_runner.go:195] Run: rm -f paused
	I1025 10:20:58.140856  630019 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 10:20:58.143063  630019 out.go:179] * Done! kubectl is now configured to use "newest-cni-667966" cluster and "default" namespace by default
	W1025 10:20:55.767613  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	W1025 10:20:58.267561  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	I1025 10:20:57.364197  631515 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:20:57.364217  631515 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:20:57.364282  631515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-899665
	I1025 10:20:57.396014  631515 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:20:57.396229  631515 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:20:57.396633  631515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/no-preload-899665/id_rsa Username:docker}
	I1025 10:20:57.396760  631515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-899665
	I1025 10:20:57.398614  631515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/no-preload-899665/id_rsa Username:docker}
	I1025 10:20:57.431274  631515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/no-preload-899665/id_rsa Username:docker}
	I1025 10:20:57.534051  631515 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:20:57.554695  631515 node_ready.go:35] waiting up to 6m0s for node "no-preload-899665" to be "Ready" ...
	I1025 10:20:57.559887  631515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:20:57.581786  631515 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:20:57.581819  631515 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:20:57.582389  631515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:20:57.606873  631515 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:20:57.606901  631515 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:20:57.630145  631515 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:20:57.630271  631515 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:20:57.652887  631515 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:20:57.652912  631515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:20:57.673346  631515 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:20:57.673378  631515 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:20:57.688695  631515 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:20:57.688722  631515 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:20:57.703680  631515 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:20:57.703711  631515 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:20:57.718436  631515 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:20:57.718462  631515 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:20:57.734340  631515 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:20:57.734407  631515 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:20:57.750529  631515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:20:59.122649  631515 node_ready.go:49] node "no-preload-899665" is "Ready"
	I1025 10:20:59.122746  631515 node_ready.go:38] duration metric: took 1.568013142s for node "no-preload-899665" to be "Ready" ...
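	Note: the "Ready" wait above can be reproduced declaratively with kubectl against the profile's context (minikube names the context after the profile by default):
		kubectl --context no-preload-899665 wait --for=condition=Ready node/no-preload-899665 --timeout=6m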
	I1025 10:20:59.122770  631515 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:20:59.122852  631515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:20:59.779615  631515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.197192759s)
	I1025 10:20:59.779682  631515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.219744315s)
	I1025 10:20:59.779804  631515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.029238146s)
	I1025 10:20:59.779844  631515 api_server.go:72] duration metric: took 2.459570122s to wait for apiserver process to appear ...
	I1025 10:20:59.780303  631515 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:20:59.780343  631515 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:20:59.781714  631515 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-899665 addons enable metrics-server
	
	I1025 10:20:59.786218  631515 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:20:59.786243  631515 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:20:59.791065  631515 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	
	
	==> CRI-O <==
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.796730113Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.800654063Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=cfd2d95e-fc7f-42b0-87ee-50cb0527469b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.801603126Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=cd7d8b0e-4637-4ec4-8021-29e88d46dbe6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.802971733Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.80351566Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.804030261Z" level=info msg="Ran pod sandbox 12c58a1bf5193964d2d7ccaffd71f203fe55cd1693de5312555216d90fb8a0be with infra container: kube-system/kube-proxy-vngwv/POD" id=cfd2d95e-fc7f-42b0-87ee-50cb0527469b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.804527544Z" level=info msg="Ran pod sandbox 72b11ed48bdf1f74a55c55568fed114aa4b3d7bedbc25067adc04ab97c3a4dcc with infra container: kube-system/kindnet-srprb/POD" id=cd7d8b0e-4637-4ec4-8021-29e88d46dbe6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.805925096Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ef71eb44-813f-4ce2-ae66-7c039aaf0769 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.80596876Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=c786d602-591c-407d-9e3a-f77e8cbca9d4 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.807123005Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=f7931f9d-b2cc-45a1-8b2b-a0c301097336 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.80717605Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=936e64b0-3d4b-46a4-9c7c-2763d1ecaf7a name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.808408741Z" level=info msg="Creating container: kube-system/kube-proxy-vngwv/kube-proxy" id=611ed702-6ce9-420f-a05b-ff518a3b12f9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.808553008Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.809122778Z" level=info msg="Creating container: kube-system/kindnet-srprb/kindnet-cni" id=a219a0c6-b63b-4df4-bc2e-1b6a35d15529 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.809230771Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.818828952Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.819439568Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.819535566Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.819924837Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.862883937Z" level=info msg="Created container 3c68bd23f6660eb6639e6181698b7136ae4ed8928495d52e175482795618807a: kube-system/kindnet-srprb/kindnet-cni" id=a219a0c6-b63b-4df4-bc2e-1b6a35d15529 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.86394886Z" level=info msg="Starting container: 3c68bd23f6660eb6639e6181698b7136ae4ed8928495d52e175482795618807a" id=c460c58d-b011-4ac0-aea9-29160fd98215 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.866557192Z" level=info msg="Started container" PID=1033 containerID=3c68bd23f6660eb6639e6181698b7136ae4ed8928495d52e175482795618807a description=kube-system/kindnet-srprb/kindnet-cni id=c460c58d-b011-4ac0-aea9-29160fd98215 name=/runtime.v1.RuntimeService/StartContainer sandboxID=72b11ed48bdf1f74a55c55568fed114aa4b3d7bedbc25067adc04ab97c3a4dcc
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.870833231Z" level=info msg="Created container b05ffe134a05ebdc673146b172ae89b63a2a4e55e75a9f8330b396ca51baaa1f: kube-system/kube-proxy-vngwv/kube-proxy" id=611ed702-6ce9-420f-a05b-ff518a3b12f9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.871782707Z" level=info msg="Starting container: b05ffe134a05ebdc673146b172ae89b63a2a4e55e75a9f8330b396ca51baaa1f" id=53bd2088-9119-421f-90b6-f662c765b5fc name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.875881355Z" level=info msg="Started container" PID=1034 containerID=b05ffe134a05ebdc673146b172ae89b63a2a4e55e75a9f8330b396ca51baaa1f description=kube-system/kube-proxy-vngwv/kube-proxy id=53bd2088-9119-421f-90b6-f662c765b5fc name=/runtime.v1.RuntimeService/StartContainer sandboxID=12c58a1bf5193964d2d7ccaffd71f203fe55cd1693de5312555216d90fb8a0be
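	Note: the CRI-O entries above are journal output; a comparable tail can be pulled directly, assuming the node still runs crio as a systemd unit as minikube configures it:
		minikube -p newest-cni-667966 ssh -- sudo journalctl -u crio --no-pager -n 25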
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3c68bd23f6660       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   5 seconds ago       Running             kindnet-cni               1                   72b11ed48bdf1       kindnet-srprb                               kube-system
	b05ffe134a05e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   5 seconds ago       Running             kube-proxy                1                   12c58a1bf5193       kube-proxy-vngwv                            kube-system
	dc5e1fe15e732       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   8 seconds ago       Running             kube-scheduler            1                   40bc2a940a2ec       kube-scheduler-newest-cni-667966            kube-system
	9f8c1df6dfdf4       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   8 seconds ago       Running             etcd                      1                   5cb89e1d2c833       etcd-newest-cni-667966                      kube-system
	043d021586bed       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   8 seconds ago       Running             kube-apiserver            1                   41640438055b1       kube-apiserver-newest-cni-667966            kube-system
	d1f99cc829179       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   8 seconds ago       Running             kube-controller-manager   1                   989e151ffd5ee       kube-controller-manager-newest-cni-667966   kube-system
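	Note: the container table above corresponds to a plain crictl listing on the node:
		minikube -p newest-cni-667966 ssh -- sudo crictl ps -a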
	
	
	==> describe nodes <==
	Name:               newest-cni-667966
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-667966
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=newest-cni-667966
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_20_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:20:25 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-667966
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:20:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:20:56 +0000   Sat, 25 Oct 2025 10:20:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:20:56 +0000   Sat, 25 Oct 2025 10:20:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:20:56 +0000   Sat, 25 Oct 2025 10:20:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 25 Oct 2025 10:20:56 +0000   Sat, 25 Oct 2025 10:20:23 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-667966
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                276bfa54-9db8-48b4-86d5-3278d4455526
	  Boot ID:                    41de8bfa-0cc1-441f-80d4-c56fb8371229
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-667966                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-srprb                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-newest-cni-667966             250m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-newest-cni-667966    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-vngwv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-newest-cni-667966             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 5s    kube-proxy       
	  Normal  Starting                 35s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s   kubelet          Node newest-cni-667966 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s   kubelet          Node newest-cni-667966 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s   kubelet          Node newest-cni-667966 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s   node-controller  Node newest-cni-667966 event: Registered Node newest-cni-667966 in Controller
	  Normal  RegisteredNode           3s    node-controller  Node newest-cni-667966 event: Registered Node newest-cni-667966 in Controller
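	The Ready=False condition above is the immediate blocker: the kubelet reports no CNI configuration, so the node stays NotReady until kindnet writes its config file. Assuming the node is still reachable, the CNI directory and the Ready message can be checked directly:
	
	    out/minikube-linux-amd64 ssh -p newest-cni-667966 sudo ls -la /etc/cni/net.d/
	    kubectl --context newest-cni-667966 get node newest-cni-667966 -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'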
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[Oct25 10:18] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 3d 4d bf 49 5d 08 06
	[  +0.000365] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fe 72 b8 ab d2 81 08 06
	[ +29.291338] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 96 23 11 37 e3 00 08 06
	[  +0.000335] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[ +21.527050] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 89 98 95 1f c3 08 06
	[  +0.000689] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[Oct25 10:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[  +9.472150] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
	[  +6.585715] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ce 90 e9 36 a0 95 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[ +15.111475] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 5e 04 d2 54 0d 08 06
	[  +0.000467] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
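	The repeated "martian source" entries are the kernel flagging packets whose 10.244.0.0/16 source address is unexpected on eth0, typically harmless cross-talk from other test clusters on the same host rather than the failure cause. They can be filtered out of the node's ring buffer with:
	
	    out/minikube-linux-amd64 ssh -p newest-cni-667966 sudo dmesg | grep -i martian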
	
	
	==> etcd [9f8c1df6dfdf4d3f7a952f8fecf040c1639fbc9112d5b20da3d4311228fe970b] <==
	{"level":"warn","ts":"2025-10-25T10:20:55.489134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.509154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.528252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.535916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.544130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.552608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.560037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.568475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.576331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.587621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.594872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.602268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.610562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.617603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.625252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.633274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.640438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.648932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.657708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.665381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.673441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.693581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.700665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.707769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.769398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33716","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:21:02 up  2:03,  0 user,  load average: 5.90, 4.99, 5.96
	Linux newest-cni-667966 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3c68bd23f6660eb6639e6181698b7136ae4ed8928495d52e175482795618807a] <==
	I1025 10:20:57.141110       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:20:57.142750       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1025 10:20:57.142912       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:20:57.142928       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:20:57.142955       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:20:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:20:57.539273       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:20:57.540749       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:20:57.540777       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	E1025 10:20:57.539976       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 10:20:57.540005       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 10:20:57.540656       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1025 10:20:57.541162       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 10:20:58.841707       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:20:58.841758       1 metrics.go:72] Registering metrics
	I1025 10:20:58.841837       1 controller.go:711] "Syncing nftables rules"
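	kindnet comes up before the restarted apiserver is accepting connections, so its initial reflector lists fail with connection refused against 10.96.0.1:443; the later "Caches are synced" line shows the retries succeeded. If needed, its log can be re-read by pod name:
	
	    kubectl --context newest-cni-667966 -n kube-system logs kindnet-srprb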
	
	
	==> kube-apiserver [043d021586bedd90d0ccb57b16a6588989a4f1d67466bdf08a11a2fad83d6525] <==
	I1025 10:20:56.337307       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:20:56.337314       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:20:56.337608       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1025 10:20:56.340013       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 10:20:56.346626       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 10:20:56.346804       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:20:56.349925       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 10:20:56.363422       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 10:20:56.371437       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 10:20:56.371537       1 policy_source.go:240] refreshing policies
	I1025 10:20:56.383458       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:20:56.387804       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:20:56.582240       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:20:56.645754       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:20:56.679497       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:20:56.704061       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:20:56.713358       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:20:56.772009       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.119.143"}
	I1025 10:20:56.784106       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.240.218"}
	I1025 10:20:57.242668       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:20:59.834841       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:21:00.185540       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:21:00.235050       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
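	The apiserver log shows a normal restart sequence (caches syncing, quota evaluators registering, dashboard ClusterIPs allocated). Assuming the endpoint is still serving, its aggregate health can be queried directly:
	
	    kubectl --context newest-cni-667966 get --raw '/readyz?verbose'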
	
	
	==> kube-controller-manager [d1f99cc829179c6c6f2484ba5bc57e6507269d2e725b6feddf3428922eceb51d] <==
	I1025 10:20:59.682119       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 10:20:59.682199       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 10:20:59.682508       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 10:20:59.683771       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 10:20:59.685566       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:20:59.688485       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 10:20:59.688589       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 10:20:59.688715       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-667966"
	I1025 10:20:59.688719       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 10:20:59.688769       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 10:20:59.688833       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:20:59.688837       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 10:20:59.688889       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 10:20:59.688896       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 10:20:59.688904       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 10:20:59.691119       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 10:20:59.692391       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:20:59.694989       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 10:20:59.695130       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 10:20:59.697698       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:20:59.700113       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 10:20:59.704509       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:20:59.704534       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:20:59.704546       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:20:59.710276       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
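	"Entering master disruption mode" means the node-lifecycle controller sees every node NotReady and suspends pod eviction until at least one node recovers; this is consistent with the NotReady condition above rather than a separate fault. Node readiness at the time can be checked with:
	
	    kubectl --context newest-cni-667966 get nodes -o wide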
	
	
	==> kube-proxy [b05ffe134a05ebdc673146b172ae89b63a2a4e55e75a9f8330b396ca51baaa1f] <==
	I1025 10:20:56.940947       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:20:57.019771       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:20:57.120584       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:20:57.120627       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1025 10:20:57.120771       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:20:57.169050       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:20:57.169183       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:20:57.174988       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:20:57.175532       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:20:57.175803       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:20:57.179736       1 config.go:309] "Starting node config controller"
	I1025 10:20:57.182436       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:20:57.182478       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:20:57.181787       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:20:57.182555       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:20:57.180254       1 config.go:200] "Starting service config controller"
	I1025 10:20:57.182661       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:20:57.180196       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:20:57.182735       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:20:57.283452       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 10:20:57.283521       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:20:57.283420       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
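	The kube-proxy nodePortAddresses warning is advisory and has no bearing on the Pause failure: with the field unset, NodePort services accept connections on every local IP. The message itself names the remedy; as a sketch, assuming the standard kubeadm-managed ConfigMap, the field could be set via:
	
	    kubectl --context newest-cni-667966 -n kube-system edit configmap kube-proxy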
	
	
	==> kube-scheduler [dc5e1fe15e732a2803c1f34dbd191e88cbb7d2a206a70f2c5cceb65b9334f033] <==
	I1025 10:20:55.709585       1 serving.go:386] Generated self-signed cert in-memory
	I1025 10:20:57.100909       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:20:57.100938       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:20:57.106061       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1025 10:20:57.106235       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1025 10:20:57.106147       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:20:57.106422       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:20:57.106115       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:20:57.106505       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:20:57.106613       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:20:57.106635       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 10:20:57.207282       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:20:57.207444       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:20:57.207485       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 25 10:20:55 newest-cni-667966 kubelet[656]: E1025 10:20:55.567950     656 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-667966\" not found" node="newest-cni-667966"
	Oct 25 10:20:55 newest-cni-667966 kubelet[656]: E1025 10:20:55.568243     656 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-667966\" not found" node="newest-cni-667966"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: E1025 10:20:56.150968     656 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-667966\" not found" node="newest-cni-667966"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.291425     656 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-667966"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: E1025 10:20:56.417004     656 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-667966\" already exists" pod="kube-system/kube-controller-manager-newest-cni-667966"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.417227     656 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-667966"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: E1025 10:20:56.425545     656 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-667966\" already exists" pod="kube-system/kube-scheduler-newest-cni-667966"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.425598     656 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-667966"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: E1025 10:20:56.435826     656 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-667966\" already exists" pod="kube-system/etcd-newest-cni-667966"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.435871     656 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-667966"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: E1025 10:20:56.446523     656 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-667966\" already exists" pod="kube-system/kube-apiserver-newest-cni-667966"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.482849     656 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-667966"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.482966     656 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-667966"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.483015     656 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.484038     656 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.487014     656 apiserver.go:52] "Watching apiserver"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.492820     656 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.573264     656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02e64d9a-cbe3-4e98-81a0-7c609fa2b1bb-xtables-lock\") pod \"kindnet-srprb\" (UID: \"02e64d9a-cbe3-4e98-81a0-7c609fa2b1bb\") " pod="kube-system/kindnet-srprb"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.573367     656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/273b5cf5-0600-4009-bab3-06b3a900b02d-lib-modules\") pod \"kube-proxy-vngwv\" (UID: \"273b5cf5-0600-4009-bab3-06b3a900b02d\") " pod="kube-system/kube-proxy-vngwv"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.573401     656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/02e64d9a-cbe3-4e98-81a0-7c609fa2b1bb-cni-cfg\") pod \"kindnet-srprb\" (UID: \"02e64d9a-cbe3-4e98-81a0-7c609fa2b1bb\") " pod="kube-system/kindnet-srprb"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.573424     656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02e64d9a-cbe3-4e98-81a0-7c609fa2b1bb-lib-modules\") pod \"kindnet-srprb\" (UID: \"02e64d9a-cbe3-4e98-81a0-7c609fa2b1bb\") " pod="kube-system/kindnet-srprb"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.573480     656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/273b5cf5-0600-4009-bab3-06b3a900b02d-xtables-lock\") pod \"kube-proxy-vngwv\" (UID: \"273b5cf5-0600-4009-bab3-06b3a900b02d\") " pod="kube-system/kube-proxy-vngwv"
	Oct 25 10:20:59 newest-cni-667966 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:20:59 newest-cni-667966 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:20:59 newest-cni-667966 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
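	The final three kubelet entries are the decisive ones for this test: systemd stopped kubelet.service at 10:20:59, which is what `minikube pause` initiates by design, yet the pause command itself never completed (see the audit table below, where its row has no end time). Kubelet state after the fact can be confirmed with:
	
	    out/minikube-linux-amd64 ssh -p newest-cni-667966 sudo systemctl status kubelet --no-pager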
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-667966 -n newest-cni-667966
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-667966 -n newest-cni-667966: exit status 2 (356.194168ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-667966 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-r94h4 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-6q4tv kubernetes-dashboard-855c9754f9-nlbwv
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-667966 describe pod coredns-66bc5c9577-r94h4 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-6q4tv kubernetes-dashboard-855c9754f9-nlbwv
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-667966 describe pod coredns-66bc5c9577-r94h4 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-6q4tv kubernetes-dashboard-855c9754f9-nlbwv: exit status 1 (67.978292ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-r94h4" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-6q4tv" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-nlbwv" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-667966 describe pod coredns-66bc5c9577-r94h4 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-6q4tv kubernetes-dashboard-855c9754f9-nlbwv: exit status 1
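The NotFound errors above are a collection race, not an additional failure: by the time the post-mortem ran `describe pod`, the non-running pods listed a moment earlier had likely been deleted or replaced. The live set can be re-queried with the same selector:

    kubectl --context newest-cni-667966 get po -A --field-selector=status.phase!=Running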
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-667966
helpers_test.go:243: (dbg) docker inspect newest-cni-667966:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cede76718eb297dbc08c6a92f84a8a33664f7c9525ae93a52761622e2228f38d",
	        "Created": "2025-10-25T10:20:12.207812957Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 630323,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:20:45.798949325Z",
	            "FinishedAt": "2025-10-25T10:20:44.797092589Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/cede76718eb297dbc08c6a92f84a8a33664f7c9525ae93a52761622e2228f38d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cede76718eb297dbc08c6a92f84a8a33664f7c9525ae93a52761622e2228f38d/hostname",
	        "HostsPath": "/var/lib/docker/containers/cede76718eb297dbc08c6a92f84a8a33664f7c9525ae93a52761622e2228f38d/hosts",
	        "LogPath": "/var/lib/docker/containers/cede76718eb297dbc08c6a92f84a8a33664f7c9525ae93a52761622e2228f38d/cede76718eb297dbc08c6a92f84a8a33664f7c9525ae93a52761622e2228f38d-json.log",
	        "Name": "/newest-cni-667966",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-667966:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-667966",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cede76718eb297dbc08c6a92f84a8a33664f7c9525ae93a52761622e2228f38d",
	                "LowerDir": "/var/lib/docker/overlay2/ced9eee064c8b62082c8ab15ce64e3d3efdb1a398a85d422f795367ad25ee78d-init/diff:/var/lib/docker/overlay2/9d1960ec6ab151df7efbd4d43f9ccd1aaf4e0bd9e7db4285118644a6d5a54279/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ced9eee064c8b62082c8ab15ce64e3d3efdb1a398a85d422f795367ad25ee78d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ced9eee064c8b62082c8ab15ce64e3d3efdb1a398a85d422f795367ad25ee78d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ced9eee064c8b62082c8ab15ce64e3d3efdb1a398a85d422f795367ad25ee78d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-667966",
	                "Source": "/var/lib/docker/volumes/newest-cni-667966/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-667966",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-667966",
	                "name.minikube.sigs.k8s.io": "newest-cni-667966",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1aeb2edecbe5406f33625a7190e1ceef6a9cb28571a0ad5934c745b67e9ec417",
	            "SandboxKey": "/var/run/docker/netns/1aeb2edecbe5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-667966": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:10:a2:cd:8a:d7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1607edd0e575c882979f9db63a22ad5ee1f0aabcbcf3a5dc021515221638bbcb",
	                    "EndpointID": "99df72bc9a43b07acd79ccfb6eb6d94b7e3e92a2c005004e8df61d3fe5d19e7e",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-667966",
	                        "cede76718eb2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
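Note that the inspect output reports State.Running=true and State.Paused=false for the kic container. `minikube pause` freezes the Kubernetes workloads inside the node rather than docker-pausing the outer container, so these values do not by themselves indicate whether the pause took effect. The two fields can be extracted directly:

    docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-667966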
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-667966 -n newest-cni-667966
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-667966 -n newest-cni-667966: exit status 2 (380.284541ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-667966 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-667966 logs -n 25: (1.466116333s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-119085 sudo cat /etc/containerd/config.toml                                                                                                                                                                                        │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo containerd config dump                                                                                                                                                                                                 │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                          │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo systemctl cat crio --no-pager                                                                                                                                                                                          │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ ssh     │ -p flannel-119085 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-714798 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ ssh     │ -p flannel-119085 sudo crio config                                                                                                                                                                                                            │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ delete  │ -p flannel-119085                                                                                                                                                                                                                             │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ stop    │ -p old-k8s-version-714798 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p newest-cni-667966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-714798 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p old-k8s-version-714798 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-899665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ stop    │ -p no-preload-899665 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ addons  │ enable metrics-server -p newest-cni-667966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ stop    │ -p newest-cni-667966 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-767846 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-667966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p newest-cni-667966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ stop    │ -p default-k8s-diff-port-767846 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:21 UTC │
	│ addons  │ enable dashboard -p no-preload-899665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p no-preload-899665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ image   │ newest-cni-667966 image list --format=json                                                                                                                                                                                                    │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ pause   │ -p newest-cni-667966 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-767846 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
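	The audit trail above pins down the failure window: the `pause -p newest-cni-667966 --alsologtostderr -v=1` row records a START TIME but no END TIME, consistent with the pause command never completing cleanly. Assuming the profile still exists, re-running it with the same flags is the most direct reproduction:
	
	    out/minikube-linux-amd64 pause -p newest-cni-667966 --alsologtostderr -v=1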
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:20:48
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:20:48.892241  631515 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:20:48.892653  631515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:20:48.892669  631515 out.go:374] Setting ErrFile to fd 2...
	I1025 10:20:48.892676  631515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:20:48.893047  631515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 10:20:48.893975  631515 out.go:368] Setting JSON to false
	I1025 10:20:48.895918  631515 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7398,"bootTime":1761380251,"procs":405,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 10:20:48.896133  631515 start.go:141] virtualization: kvm guest
	I1025 10:20:48.899513  631515 out.go:179] * [no-preload-899665] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 10:20:48.901568  631515 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:20:48.901592  631515 notify.go:220] Checking for updates...
	I1025 10:20:48.905055  631515 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:20:48.907313  631515 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:20:48.909465  631515 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	I1025 10:20:48.910986  631515 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 10:20:48.912379  631515 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:20:48.914291  631515 config.go:182] Loaded profile config "no-preload-899665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:20:48.914976  631515 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:20:48.949962  631515 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 10:20:48.950082  631515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:20:49.042118  631515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-25 10:20:49.02747325 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
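The `docker system info --format "{{json .}}"` call above returns the whole engine state as a single JSON object, which is what info.go logs. A minimal sketch, assuming only the standard library, of issuing the same call and decoding a few of the fields visible in the dump (the struct here is illustrative, not minikube's own type):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo picks out a handful of the fields seen in the log above.
type dockerInfo struct {
	ServerVersion   string `json:"ServerVersion"`
	OperatingSystem string `json:"OperatingSystem"`
	CgroupDriver    string `json:"CgroupDriver"`
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("docker %s on %s, cgroup driver %s, %d CPUs, %d bytes RAM\n",
		info.ServerVersion, info.OperatingSystem, info.CgroupDriver, info.NCPU, info.MemTotal)
}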
	I1025 10:20:49.042250  631515 docker.go:318] overlay module found
	I1025 10:20:49.045154  631515 out.go:179] * Using the docker driver based on existing profile
	I1025 10:20:49.046717  631515 start.go:305] selected driver: docker
	I1025 10:20:49.046739  631515 start.go:925] validating driver "docker" against &{Name:no-preload-899665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-899665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:20:49.046879  631515 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:20:49.047724  631515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:20:49.128358  631515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-25 10:20:49.114483022 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:20:49.128697  631515 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:20:49.128747  631515 cni.go:84] Creating CNI manager for ""
	I1025 10:20:49.128791  631515 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:20:49.128835  631515 start.go:349] cluster config:
	{Name:no-preload-899665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-899665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:20:49.132098  631515 out.go:179] * Starting "no-preload-899665" primary control-plane node in "no-preload-899665" cluster
	I1025 10:20:49.133686  631515 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:20:49.135734  631515 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:20:49.137151  631515 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:20:49.137270  631515 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:20:49.137291  631515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/no-preload-899665/config.json ...
	I1025 10:20:49.137591  631515 cache.go:107] acquiring lock: {Name:mk40b6df814b6b5925975339c490eaa473a6de34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:20:49.137678  631515 cache.go:107] acquiring lock: {Name:mk598afb8705e91839dae1d4a2c6bc154c20ab42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:20:49.137631  631515 cache.go:107] acquiring lock: {Name:mkca7e8f698c00a2dded053258d11cb559d4a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:20:49.137602  631515 cache.go:107] acquiring lock: {Name:mk87b9b51f951a49c1140ff827e752119366fce0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:20:49.137763  631515 cache.go:115] /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1025 10:20:49.137767  631515 cache.go:107] acquiring lock: {Name:mk5e595f9203d1fc28a17a4a355a91fb1aaa2600 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:20:49.137783  631515 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 217.085µs
	I1025 10:20:49.137794  631515 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1025 10:20:49.137799  631515 cache.go:115] /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1025 10:20:49.137814  631515 cache.go:115] /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1025 10:20:49.137817  631515 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 156.703µs
	I1025 10:20:49.137826  631515 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 238.187µs
	I1025 10:20:49.137822  631515 cache.go:107] acquiring lock: {Name:mkc1b890852e9d05ce9fc035ad71487b8b862e47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:20:49.137836  631515 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1025 10:20:49.137834  631515 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1025 10:20:49.137821  631515 cache.go:115] /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1025 10:20:49.137848  631515 cache.go:115] /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1025 10:20:49.137853  631515 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 267.096µs
	I1025 10:20:49.137876  631515 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1025 10:20:49.137878  631515 cache.go:115] /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1025 10:20:49.137890  631515 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 75.695µs
	I1025 10:20:49.137859  631515 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 94.988µs
	I1025 10:20:49.137602  631515 cache.go:107] acquiring lock: {Name:mk66fb8d1501241cf6467abb2c486b29aeb41ec8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:20:49.137911  631515 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1025 10:20:49.137901  631515 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1025 10:20:49.137698  631515 cache.go:107] acquiring lock: {Name:mka703b719c5bb116e1b09495d013db0ad942e12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:20:49.137982  631515 cache.go:115] /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 10:20:49.137997  631515 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 413.277µs
	I1025 10:20:49.138006  631515 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 10:20:49.138026  631515 cache.go:115] /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1025 10:20:49.138040  631515 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 354.693µs
	I1025 10:20:49.138055  631515 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21767-321838/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1025 10:20:49.138065  631515 cache.go:87] Successfully saved all images to host disk.
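Every image above is handled the same way: take a per-image lock, then skip the save when the cache tarball already exists, which is why each entry completes in microseconds. A minimal sketch of that pattern (the keyFor helper and paths are hypothetical, not minikube's):

package main

import (
	"fmt"
	"os"
	"strings"
	"sync"
)

var locks sync.Map // image name -> *sync.Mutex

// keyFor maps "registry.k8s.io/pause:3.10.1" to "<cacheDir>/registry.k8s.io/pause_3.10.1".
func keyFor(image, cacheDir string) string {
	return cacheDir + "/" + strings.ReplaceAll(image, ":", "_")
}

func ensureCached(image, cacheDir string) error {
	mu, _ := locks.LoadOrStore(image, &sync.Mutex{})
	mu.(*sync.Mutex).Lock()
	defer mu.(*sync.Mutex).Unlock()
	tar := keyFor(image, cacheDir)
	if _, err := os.Stat(tar); err == nil {
		fmt.Printf("cache image %q -> %q exists, skipping\n", image, tar)
		return nil // the "exists ... succeeded" case in the log
	}
	// pull-and-save-to-tar would go here on a cache miss
	return nil
}

func main() {
	_ = ensureCached("registry.k8s.io/pause:3.10.1", os.TempDir())
}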
	I1025 10:20:49.166217  631515 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:20:49.166239  631515 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:20:49.166258  631515 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:20:49.166284  631515 start.go:360] acquireMachinesLock for no-preload-899665: {Name:mkc2679ab0df95807a2d573607220fcaad35ba8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:20:49.166370  631515 start.go:364] duration metric: took 69.129µs to acquireMachinesLock for "no-preload-899665"
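The lock here is keyed by machine name, and the time spent waiting for it is reported as a duration metric. A minimal sketch of that shape (illustrative only, assuming just sync and time from the standard library):

package main

import (
	"fmt"
	"sync"
	"time"
)

var machineLocks sync.Map // machine name -> *sync.Mutex

// acquireMachinesLock blocks until the named machine's lock is held and
// reports how long the wait took, mirroring the log line above.
func acquireMachinesLock(name string) (release func()) {
	start := time.Now()
	mu, _ := machineLocks.LoadOrStore(name, &sync.Mutex{})
	mu.(*sync.Mutex).Lock()
	fmt.Printf("duration metric: took %s to acquireMachinesLock for %q\n", time.Since(start), name)
	return mu.(*sync.Mutex).Unlock
}

func main() {
	release := acquireMachinesLock("no-preload-899665")
	defer release()
	// create or fix the host while holding the lock
}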
	I1025 10:20:49.166391  631515 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:20:49.166397  631515 fix.go:54] fixHost starting: 
	I1025 10:20:49.166633  631515 cli_runner.go:164] Run: docker container inspect no-preload-899665 --format={{.State.Status}}
	I1025 10:20:49.189083  631515 fix.go:112] recreateIfNeeded on no-preload-899665: state=Stopped err=<nil>
	W1025 10:20:49.189137  631515 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:20:45.768193  630019 out.go:252] * Restarting existing docker container for "newest-cni-667966" ...
	I1025 10:20:45.768280  630019 cli_runner.go:164] Run: docker start newest-cni-667966
	I1025 10:20:46.070133  630019 cli_runner.go:164] Run: docker container inspect newest-cni-667966 --format={{.State.Status}}
	I1025 10:20:46.092660  630019 kic.go:430] container "newest-cni-667966" state is running.
	I1025 10:20:46.093308  630019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-667966
	I1025 10:20:46.115659  630019 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/newest-cni-667966/config.json ...
	I1025 10:20:46.115928  630019 machine.go:93] provisionDockerMachine start ...
	I1025 10:20:46.116006  630019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-667966
	I1025 10:20:46.137989  630019 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:46.138221  630019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1025 10:20:46.138233  630019 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:20:46.138879  630019 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60412->127.0.0.1:33113: read: connection reset by peer
	I1025 10:20:49.299940  630019 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-667966
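
The first dial above fails with "connection reset by peer" because sshd inside the just-restarted container is not ready yet; the same command succeeds about three seconds later, implying a dial-until-ready loop. A minimal sketch of such a wait, using the forwarded port from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH retries a TCP dial until sshd accepts connections or the deadline passes.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("ssh not reachable at %s: %w", addr, err)
		}
		time.Sleep(500 * time.Millisecond) // early dials fail while the container boots
	}
}

func main() {
	if err := waitForSSH("127.0.0.1:33113", 30*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("sshd is up")
}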
	
	I1025 10:20:49.299993  630019 ubuntu.go:182] provisioning hostname "newest-cni-667966"
	I1025 10:20:49.300060  630019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-667966
	I1025 10:20:49.328450  630019 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:49.328866  630019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1025 10:20:49.328904  630019 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-667966 && echo "newest-cni-667966" | sudo tee /etc/hostname
	I1025 10:20:49.500631  630019 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-667966
	
	I1025 10:20:49.500721  630019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-667966
	I1025 10:20:49.523293  630019 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:49.523603  630019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1025 10:20:49.523631  630019 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-667966' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-667966/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-667966' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:20:49.684677  630019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
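The shell snippet above is idempotent: it leaves /etc/hosts alone when the hostname is already present, rewrites the 127.0.1.1 line if one exists, and appends a new entry otherwise. The same logic in Go, as a minimal sketch (a hypothetical helper, not minikube source):

package main

import (
	"os"
	"strings"
)

// ensureHostname mirrors the grep/sed/tee sequence in the log above.
func ensureHostname(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		if strings.HasSuffix(l, " "+hostname) || strings.HasSuffix(l, "\t"+hostname) {
			return nil // already present, nothing to do
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname // replace the existing entry
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
		}
	}
	lines = append(lines, "127.0.1.1 "+hostname) // or append a new one
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "newest-cni-667966"); err != nil {
		panic(err)
	}
}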
	I1025 10:20:49.684712  630019 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-321838/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-321838/.minikube}
	I1025 10:20:49.684755  630019 ubuntu.go:190] setting up certificates
	I1025 10:20:49.684766  630019 provision.go:84] configureAuth start
	I1025 10:20:49.684823  630019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-667966
	I1025 10:20:49.706309  630019 provision.go:143] copyHostCerts
	I1025 10:20:49.706408  630019 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem, removing ...
	I1025 10:20:49.706430  630019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem
	I1025 10:20:49.706492  630019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem (1078 bytes)
	I1025 10:20:49.706664  630019 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem, removing ...
	I1025 10:20:49.706678  630019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem
	I1025 10:20:49.706711  630019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem (1123 bytes)
	I1025 10:20:49.706774  630019 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem, removing ...
	I1025 10:20:49.706781  630019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem
	I1025 10:20:49.706806  630019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem (1679 bytes)
	I1025 10:20:49.706859  630019 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem org=jenkins.newest-cni-667966 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-667966]
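The server certificate generated here carries the SAN list from the log line above (two IPs, three DNS names) and is signed by the machine CA. A minimal sketch with crypto/x509 of issuing a certificate with those SANs; it self-signs for brevity where the real flow signs with the CA key and cert:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-667966"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // the CertExpiration seen in the config
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-667966"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = der // PEM-encode and write as server.pem / server-key.pem in the real flow
}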
	I1025 10:20:50.149207  630019 provision.go:177] copyRemoteCerts
	I1025 10:20:50.149272  630019 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:20:50.149310  630019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-667966
	I1025 10:20:50.169121  630019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/newest-cni-667966/id_rsa Username:docker}
	I1025 10:20:50.275165  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:20:50.296422  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:20:50.317915  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:20:50.339177  630019 provision.go:87] duration metric: took 654.39529ms to configureAuth
	I1025 10:20:50.339213  630019 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:20:50.339482  630019 config.go:182] Loaded profile config "newest-cni-667966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:20:50.339614  630019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-667966
	I1025 10:20:50.360719  630019 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:50.361036  630019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1025 10:20:50.361057  630019 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:20:50.668171  630019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:20:50.668203  630019 machine.go:96] duration metric: took 4.552256919s to provisionDockerMachine
	I1025 10:20:50.668221  630019 start.go:293] postStartSetup for "newest-cni-667966" (driver="docker")
	I1025 10:20:50.668236  630019 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:20:50.668350  630019 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:20:50.668412  630019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-667966
	I1025 10:20:50.692505  630019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/newest-cni-667966/id_rsa Username:docker}
	I1025 10:20:50.808251  630019 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:20:50.814645  630019 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:20:50.814680  630019 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:20:50.814694  630019 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/addons for local assets ...
	I1025 10:20:50.814762  630019 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/files for local assets ...
	I1025 10:20:50.814858  630019 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem -> 3254552.pem in /etc/ssl/certs
	I1025 10:20:50.814990  630019 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:20:50.826107  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:20:50.852910  630019 start.go:296] duration metric: took 184.668446ms for postStartSetup
	I1025 10:20:50.853012  630019 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:20:50.853075  630019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-667966
	I1025 10:20:50.880740  630019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/newest-cni-667966/id_rsa Username:docker}
	I1025 10:20:50.991288  630019 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:20:50.998064  630019 fix.go:56] duration metric: took 5.251937458s for fixHost
	I1025 10:20:50.998095  630019 start.go:83] releasing machines lock for "newest-cni-667966", held for 5.251994374s
	I1025 10:20:50.998168  630019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-667966
	I1025 10:20:51.022426  630019 ssh_runner.go:195] Run: cat /version.json
	I1025 10:20:51.022496  630019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-667966
	I1025 10:20:51.022529  630019 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:20:51.022612  630019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-667966
	I1025 10:20:51.047070  630019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/newest-cni-667966/id_rsa Username:docker}
	I1025 10:20:51.047960  630019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/newest-cni-667966/id_rsa Username:docker}
	I1025 10:20:51.231545  630019 ssh_runner.go:195] Run: systemctl --version
	I1025 10:20:51.240537  630019 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:20:51.292369  630019 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:20:51.299044  630019 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:20:51.299124  630019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:20:51.310848  630019 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
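The find/mv step above renames any bridge or podman CNI config with a ".mk_disabled" suffix so it stops shadowing the CNI minikube is about to install; in this run there was nothing to rename. A minimal sketch of the same sweep in Go (illustrative, not minikube source):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI moves bridge/podman configs aside, skipping ones already disabled.
func disableBridgeCNI(dir string) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", src)
		}
	}
	return nil
}

func main() {
	if err := disableBridgeCNI("/etc/cni/net.d"); err != nil {
		panic(err)
	}
}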
	I1025 10:20:51.310880  630019 start.go:495] detecting cgroup driver to use...
	I1025 10:20:51.310918  630019 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 10:20:51.310977  630019 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:20:51.332197  630019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:20:51.349959  630019 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:20:51.350023  630019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:20:51.371166  630019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:20:51.389953  630019 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:20:51.505076  630019 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:20:51.628195  630019 docker.go:234] disabling docker service ...
	I1025 10:20:51.628285  630019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:20:51.649484  630019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:20:51.667012  630019 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:20:51.783861  630019 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:20:51.894561  630019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:20:51.912476  630019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:20:51.932229  630019 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:20:51.932291  630019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:51.945796  630019 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 10:20:51.945885  630019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:51.957107  630019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:51.969962  630019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:51.982748  630019 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:20:51.994471  630019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:52.008638  630019 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:52.021198  630019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:52.034791  630019 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:20:52.045892  630019 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:20:52.057105  630019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:20:52.176914  630019 ssh_runner.go:195] Run: sudo systemctl restart crio
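The sed calls above edit /etc/crio/crio.conf.d/02-crio.conf in place (pause image, systemd cgroup manager, conmon cgroup, unprivileged-port sysctl) before crio is restarted. A minimal sketch of the first two rewrites in Go, using the same whole-line regex replacement:

package main

import (
	"os"
	"regexp"
)

var (
	pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
)

// patchCrioConf rewrites the pause_image and cgroup_manager lines; the caller
// then runs "systemctl daemon-reload" and "systemctl restart crio" as in the log.
func patchCrioConf(path, pauseImage, cgroupMgr string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	data = pauseRe.ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
	data = cgroupRe.ReplaceAll(data, []byte(`cgroup_manager = "`+cgroupMgr+`"`))
	return os.WriteFile(path, data, 0644)
}

func main() {
	if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10.1", "systemd"); err != nil {
		panic(err)
	}
}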
	I1025 10:20:52.803601  630019 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:20:52.803682  630019 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:20:52.809919  630019 start.go:563] Will wait 60s for crictl version
	I1025 10:20:52.809990  630019 ssh_runner.go:195] Run: which crictl
	I1025 10:20:52.815976  630019 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:20:52.849958  630019 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:20:52.850050  630019 ssh_runner.go:195] Run: crio --version
	I1025 10:20:52.891546  630019 ssh_runner.go:195] Run: crio --version
	I1025 10:20:52.938840  630019 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:20:52.940039  630019 cli_runner.go:164] Run: docker network inspect newest-cni-667966 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:20:52.964531  630019 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1025 10:20:52.970546  630019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:20:52.988553  630019 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1025 10:20:48.769498  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	W1025 10:20:51.266740  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	W1025 10:20:53.268892  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	I1025 10:20:49.193936  631515 out.go:252] * Restarting existing docker container for "no-preload-899665" ...
	I1025 10:20:49.194048  631515 cli_runner.go:164] Run: docker start no-preload-899665
	I1025 10:20:49.487349  631515 cli_runner.go:164] Run: docker container inspect no-preload-899665 --format={{.State.Status}}
	I1025 10:20:49.508735  631515 kic.go:430] container "no-preload-899665" state is running.
	I1025 10:20:49.509182  631515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-899665
	I1025 10:20:49.531885  631515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/no-preload-899665/config.json ...
	I1025 10:20:49.532161  631515 machine.go:93] provisionDockerMachine start ...
	I1025 10:20:49.532245  631515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-899665
	I1025 10:20:49.555814  631515 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:49.556042  631515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1025 10:20:49.556054  631515 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:20:49.556754  631515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:32890->127.0.0.1:33118: read: connection reset by peer
	I1025 10:20:52.718847  631515 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-899665
	
	I1025 10:20:52.718897  631515 ubuntu.go:182] provisioning hostname "no-preload-899665"
	I1025 10:20:52.718985  631515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-899665
	I1025 10:20:52.744966  631515 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:52.745367  631515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1025 10:20:52.745389  631515 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-899665 && echo "no-preload-899665" | sudo tee /etc/hostname
	I1025 10:20:52.927925  631515 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-899665
	
	I1025 10:20:52.928103  631515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-899665
	I1025 10:20:52.956215  631515 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:52.956609  631515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1025 10:20:52.956647  631515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-899665' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-899665/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-899665' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:20:53.123099  631515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:20:53.123133  631515 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-321838/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-321838/.minikube}
	I1025 10:20:53.123160  631515 ubuntu.go:190] setting up certificates
	I1025 10:20:53.123173  631515 provision.go:84] configureAuth start
	I1025 10:20:53.123235  631515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-899665
	I1025 10:20:53.147069  631515 provision.go:143] copyHostCerts
	I1025 10:20:53.147134  631515 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem, removing ...
	I1025 10:20:53.147144  631515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem
	I1025 10:20:53.147207  631515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem (1078 bytes)
	I1025 10:20:53.147332  631515 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem, removing ...
	I1025 10:20:53.147348  631515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem
	I1025 10:20:53.147403  631515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem (1123 bytes)
	I1025 10:20:53.147488  631515 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem, removing ...
	I1025 10:20:53.147495  631515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem
	I1025 10:20:53.147532  631515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem (1679 bytes)
	I1025 10:20:53.147610  631515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem org=jenkins.no-preload-899665 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-899665]
	I1025 10:20:53.237709  631515 provision.go:177] copyRemoteCerts
	I1025 10:20:53.237773  631515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:20:53.237825  631515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-899665
	I1025 10:20:53.264567  631515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/no-preload-899665/id_rsa Username:docker}
	I1025 10:20:53.384587  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:20:53.405891  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:20:53.430026  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:20:53.454888  631515 provision.go:87] duration metric: took 331.700401ms to configureAuth
	I1025 10:20:53.454919  631515 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:20:53.455132  631515 config.go:182] Loaded profile config "no-preload-899665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:20:53.455253  631515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-899665
	I1025 10:20:53.478848  631515 main.go:141] libmachine: Using SSH client type: native
	I1025 10:20:53.479160  631515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1025 10:20:53.479186  631515 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:20:52.989846  630019 kubeadm.go:883] updating cluster {Name:newest-cni-667966 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-667966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:20:52.990059  630019 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:20:52.990145  630019 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:20:53.038227  630019 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:20:53.038257  630019 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:20:53.038339  630019 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:20:53.080258  630019 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:20:53.080360  630019 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:20:53.080374  630019 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1025 10:20:53.080517  630019 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-667966 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-667966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:20:53.080605  630019 ssh_runner.go:195] Run: crio config
	I1025 10:20:53.153729  630019 cni.go:84] Creating CNI manager for ""
	I1025 10:20:53.153750  630019 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:20:53.153769  630019 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1025 10:20:53.153791  630019 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-667966 NodeName:newest-cni-667966 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:20:53.153953  630019 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-667966"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
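
The kubeadm.yaml written above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch, assuming gopkg.in/yaml.v3, of walking those documents and reading each one's kind:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f) // yields one document per Decode call
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}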
	
	I1025 10:20:53.154033  630019 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:20:53.165634  630019 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:20:53.165721  630019 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:20:53.178769  630019 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 10:20:53.198208  630019 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:20:53.217518  630019 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1025 10:20:53.237738  630019 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:20:53.243932  630019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:20:53.259814  630019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:20:53.374619  630019 ssh_runner.go:195] Run: sudo systemctl start kubelet
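Note: the grep/bash pair above is an idempotent hosts-entry update: drop any line already ending in the hostname, append the fresh "ip<TAB>hostname" mapping, and copy the temp file back with sudo (a plain shell redirect would not have root's write permission on /etc/hosts). A rough Go equivalent of the same filter-and-append logic, assuming direct file access rather than ssh_runner and root privileges to write the file:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry removes any line that already maps hostname and appends
// a fresh "ip<TAB>hostname" entry, mirroring the grep -v / echo pipeline.
func upsertHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // stale entry for this hostname
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "192.168.94.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}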
	I1025 10:20:53.403849  630019 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/newest-cni-667966 for IP: 192.168.94.2
	I1025 10:20:53.403878  630019 certs.go:195] generating shared ca certs ...
	I1025 10:20:53.403903  630019 certs.go:227] acquiring lock for ca certs: {Name:mke559c04eea9c265cb2a7beab0da125bee52db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:53.404087  630019 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key
	I1025 10:20:53.404147  630019 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key
	I1025 10:20:53.404160  630019 certs.go:257] generating profile certs ...
	I1025 10:20:53.404273  630019 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/newest-cni-667966/client.key
	I1025 10:20:53.404383  630019 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/newest-cni-667966/apiserver.key.e7f90482
	I1025 10:20:53.404439  630019 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/newest-cni-667966/proxy-client.key
	I1025 10:20:53.404605  630019 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem (1338 bytes)
	W1025 10:20:53.404655  630019 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455_empty.pem, impossibly tiny 0 bytes
	I1025 10:20:53.404670  630019 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:20:53.404704  630019 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:20:53.404737  630019 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:20:53.404769  630019 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem (1679 bytes)
	I1025 10:20:53.404826  630019 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:20:53.405722  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:20:53.430160  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 10:20:53.454528  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:20:53.481952  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:20:53.515206  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/newest-cni-667966/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 10:20:53.541101  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/newest-cni-667966/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:20:53.563105  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/newest-cni-667966/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:20:53.586068  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/newest-cni-667966/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:20:53.606595  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /usr/share/ca-certificates/3254552.pem (1708 bytes)
	I1025 10:20:53.629140  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:20:53.650970  630019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem --> /usr/share/ca-certificates/325455.pem (1338 bytes)
	I1025 10:20:53.671738  630019 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
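Note: the "scp memory -->" entries mean the payload never touches a local file: ssh_runner streams an in-memory byte slice straight to the remote path. The same effect can be approximated by piping stdin into sudo tee over a plain ssh client; a sketch under those assumptions (the helper name is mine, and minikube's real implementation drives its own SSH session rather than the ssh binary):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// pushBytes writes data to remotePath on the host without creating a local
// temp file, by piping it into "sudo tee" over ssh.
func pushBytes(host, remotePath string, data []byte) error {
	cmd := exec.Command("ssh", host,
		fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
	cmd.Stdin = bytes.NewReader(data)
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	kubeconfig := []byte("apiVersion: v1\nkind: Config\n") // stand-in payload
	if err := pushBytes("docker@127.0.0.1", "/var/lib/minikube/kubeconfig", kubeconfig); err != nil {
		os.Exit(1)
	}
}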
	I1025 10:20:53.687688  630019 ssh_runner.go:195] Run: openssl version
	I1025 10:20:53.695282  630019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:20:53.706373  630019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:20:53.711131  630019 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:32 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:20:53.711204  630019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:20:53.751009  630019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:20:53.761693  630019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/325455.pem && ln -fs /usr/share/ca-certificates/325455.pem /etc/ssl/certs/325455.pem"
	I1025 10:20:53.773734  630019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/325455.pem
	I1025 10:20:53.778475  630019 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:38 /usr/share/ca-certificates/325455.pem
	I1025 10:20:53.778547  630019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/325455.pem
	I1025 10:20:53.819940  630019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/325455.pem /etc/ssl/certs/51391683.0"
	I1025 10:20:53.831740  630019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3254552.pem && ln -fs /usr/share/ca-certificates/3254552.pem /etc/ssl/certs/3254552.pem"
	I1025 10:20:53.869591  630019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3254552.pem
	I1025 10:20:53.874718  630019 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:38 /usr/share/ca-certificates/3254552.pem
	I1025 10:20:53.874790  630019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3254552.pem
	I1025 10:20:53.911841  630019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3254552.pem /etc/ssl/certs/3ec20f2e.0"
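Note: the ls / openssl / ln sequence repeated above for each CA installs it the way OpenSSL expects: the PEM sits under /usr/share/ca-certificates, and a symlink named <subject-hash>.0 (b5213941.0, 51391683.0, 3ec20f2e.0 here) points at it from /etc/ssl/certs, because OpenSSL looks certificates up by subject hash, not by file name. A condensed sketch of the same two steps, error handling trimmed:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA symlinks certPath into /etc/ssl/certs under its OpenSSL
// subject hash, e.g. b5213941.0, so TLS clients can find it.
func installCA(certPath string) error {
	// openssl x509 -hash -noout prints the 8-hex-digit subject hash.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // replace a stale link if one exists
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}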
	I1025 10:20:53.921665  630019 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:20:53.926569  630019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:20:53.966659  630019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:20:54.004766  630019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:20:54.041521  630019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:20:54.097804  630019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:20:54.153429  630019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
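Note: each of the six openssl runs above uses -checkend 86400, which exits non-zero if the certificate expires within the next 86400 seconds (24 hours); a failing check is what would send minikube down the certificate-regeneration path on restart. A small wrapper showing the exit-code contract:

package main

import (
	"fmt"
	"os/exec"
)

// expiresWithin reports whether the certificate at path expires in the
// next `seconds` seconds, using openssl's -checkend exit code:
// 0 = still valid past the window, non-zero = expiring (or unreadable).
func expiresWithin(path string, seconds int) bool {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", path,
		"-checkend", fmt.Sprint(seconds))
	return cmd.Run() != nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		fmt.Printf("%s expiring within 24h: %v\n", p, expiresWithin(p, 86400))
	}
}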
	I1025 10:20:54.214155  630019 kubeadm.go:400] StartCluster: {Name:newest-cni-667966 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-667966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:20:54.214276  630019 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:20:54.214350  630019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:20:54.253448  630019 cri.go:89] found id: "dc5e1fe15e732a2803c1f34dbd191e88cbb7d2a206a70f2c5cceb65b9334f033"
	I1025 10:20:54.253524  630019 cri.go:89] found id: "9f8c1df6dfdf4d3f7a952f8fecf040c1639fbc9112d5b20da3d4311228fe970b"
	I1025 10:20:54.253531  630019 cri.go:89] found id: "043d021586bedd90d0ccb57b16a6588989a4f1d67466bdf08a11a2fad83d6525"
	I1025 10:20:54.253536  630019 cri.go:89] found id: "d1f99cc829179c6c6f2484ba5bc57e6507269d2e725b6feddf3428922eceb51d"
	I1025 10:20:54.253541  630019 cri.go:89] found id: ""
	I1025 10:20:54.253594  630019 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:20:54.270489  630019 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:20:54Z" level=error msg="open /run/runc: no such file or directory"
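Note: the warning above is benign. crictl just listed four kube-system containers, but `runc list` fails because /run/runc does not exist yet, which simply means runc has no state directory and therefore nothing is paused; minikube logs the failure and proceeds to the restart path. A sketch of that interpretation (treating the missing state directory as an empty list is my reading of the log, not a quote of minikube's code):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// runcContainer is the subset of `runc list -f json` output we care about.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listPaused returns the IDs of paused runc containers, treating a missing
// /run/runc state directory as "no containers" rather than a hard error.
func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "no such file or directory") {
			return nil, nil // runc never ran anything on this node
		}
		return nil, fmt.Errorf("runc list: %v: %s", err, out)
	}
	var cs []runcContainer
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	fmt.Println(ids, err)
}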
	I1025 10:20:54.270581  630019 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:20:54.281117  630019 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:20:54.281142  630019 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:20:54.281200  630019 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:20:54.290456  630019 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:20:54.291147  630019 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-667966" does not appear in /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:20:54.291541  630019 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-321838/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-667966" cluster setting kubeconfig missing "newest-cni-667966" context setting]
	I1025 10:20:54.292084  630019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:54.294063  630019 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:20:54.305052  630019 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1025 10:20:54.305095  630019 kubeadm.go:601] duration metric: took 23.947598ms to restartPrimaryControlPlane
	I1025 10:20:54.305105  630019 kubeadm.go:402] duration metric: took 90.963924ms to StartCluster
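Note: restartPrimaryControlPlane decides whether reconfiguration is needed by diffing the kubeadm.yaml already on the node against the freshly generated kubeadm.yaml.new; `diff -u` exiting 0, as it does here, means the files are identical and the running cluster can be reused ("does not require reconfiguration"). The diff exit-code convention, sketched:

package main

import (
	"fmt"
	"os/exec"
)

// needsReconfigure runs `diff -u old new` and maps the exit status:
// 0 = files identical, 1 = files differ, anything else = a real error.
func needsReconfigure(oldPath, newPath string) (bool, error) {
	err := exec.Command("diff", "-u", oldPath, newPath).Run()
	if err == nil {
		return false, nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, nil
	}
	return false, err
}

func main() {
	changed, err := needsReconfigure(
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(changed, err)
}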
	I1025 10:20:54.305129  630019 settings.go:142] acquiring lock: {Name:mkccf1ae03faaf532633e370e89b56d0fc3d2b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:54.305187  630019 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:20:54.306138  630019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:54.306645  630019 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:20:54.306795  630019 config.go:182] Loaded profile config "newest-cni-667966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:20:54.306849  630019 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:20:54.306961  630019 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-667966"
	I1025 10:20:54.306980  630019 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-667966"
	I1025 10:20:54.306976  630019 addons.go:69] Setting dashboard=true in profile "newest-cni-667966"
	I1025 10:20:54.306990  630019 addons.go:69] Setting default-storageclass=true in profile "newest-cni-667966"
	I1025 10:20:54.307009  630019 addons.go:238] Setting addon dashboard=true in "newest-cni-667966"
	I1025 10:20:54.307016  630019 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-667966"
	W1025 10:20:54.307029  630019 addons.go:247] addon dashboard should already be in state true
	I1025 10:20:54.307065  630019 host.go:66] Checking if "newest-cni-667966" exists ...
	W1025 10:20:54.306988  630019 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:20:54.307107  630019 host.go:66] Checking if "newest-cni-667966" exists ...
	I1025 10:20:54.307419  630019 cli_runner.go:164] Run: docker container inspect newest-cni-667966 --format={{.State.Status}}
	I1025 10:20:54.307555  630019 cli_runner.go:164] Run: docker container inspect newest-cni-667966 --format={{.State.Status}}
	I1025 10:20:54.307641  630019 cli_runner.go:164] Run: docker container inspect newest-cni-667966 --format={{.State.Status}}
	I1025 10:20:54.312474  630019 out.go:179] * Verifying Kubernetes components...
	I1025 10:20:54.314110  630019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:20:54.336995  630019 addons.go:238] Setting addon default-storageclass=true in "newest-cni-667966"
	W1025 10:20:54.337078  630019 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:20:54.337112  630019 host.go:66] Checking if "newest-cni-667966" exists ...
	I1025 10:20:54.339271  630019 cli_runner.go:164] Run: docker container inspect newest-cni-667966 --format={{.State.Status}}
	I1025 10:20:54.343447  630019 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:20:54.343562  630019 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:20:54.345183  630019 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 10:20:54.345234  630019 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:20:54.345252  630019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:20:54.345338  630019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-667966
	I1025 10:20:54.346403  630019 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:20:54.346424  630019 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:20:54.346483  630019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-667966
	I1025 10:20:54.370474  630019 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:20:54.370504  630019 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:20:54.370572  630019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-667966
	I1025 10:20:54.377427  630019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/newest-cni-667966/id_rsa Username:docker}
	I1025 10:20:54.385507  630019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/newest-cni-667966/id_rsa Username:docker}
	I1025 10:20:54.403428  630019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/newest-cni-667966/id_rsa Username:docker}
	I1025 10:20:54.488137  630019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:20:54.511859  630019 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:20:54.511938  630019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:20:54.521737  630019 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:20:54.521770  630019 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:20:54.552927  630019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:20:54.554409  630019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:20:54.556710  630019 api_server.go:72] duration metric: took 250.019827ms to wait for apiserver process to appear ...
	I1025 10:20:54.556741  630019 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:20:54.556763  630019 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 10:20:54.584891  630019 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:20:54.584950  630019 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:20:54.607905  630019 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:20:54.607933  630019 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:20:54.626865  630019 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:20:54.626892  630019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:20:54.648265  630019 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:20:54.648291  630019 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:20:54.667061  630019 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:20:54.667089  630019 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:20:54.682638  630019 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:20:54.682672  630019 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:20:54.697727  630019 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:20:54.697752  630019 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:20:54.712641  630019 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:20:54.712667  630019 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:20:54.728065  630019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:20:54.127479  631515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:20:54.127511  631515 machine.go:96] duration metric: took 4.595330684s to provisionDockerMachine
	I1025 10:20:54.127525  631515 start.go:293] postStartSetup for "no-preload-899665" (driver="docker")
	I1025 10:20:54.127538  631515 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:20:54.127611  631515 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:20:54.127657  631515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-899665
	I1025 10:20:54.153690  631515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/no-preload-899665/id_rsa Username:docker}
	I1025 10:20:54.268471  631515 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:20:54.272723  631515 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:20:54.272758  631515 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:20:54.272773  631515 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/addons for local assets ...
	I1025 10:20:54.272833  631515 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/files for local assets ...
	I1025 10:20:54.272931  631515 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem -> 3254552.pem in /etc/ssl/certs
	I1025 10:20:54.273058  631515 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:20:54.281767  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:20:54.310788  631515 start.go:296] duration metric: took 183.246947ms for postStartSetup
	I1025 10:20:54.311124  631515 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:20:54.311213  631515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-899665
	I1025 10:20:54.341511  631515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/no-preload-899665/id_rsa Username:docker}
	I1025 10:20:54.461574  631515 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:20:54.467577  631515 fix.go:56] duration metric: took 5.301173489s for fixHost
	I1025 10:20:54.467605  631515 start.go:83] releasing machines lock for "no-preload-899665", held for 5.301219101s
	I1025 10:20:54.467683  631515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-899665
	I1025 10:20:54.490006  631515 ssh_runner.go:195] Run: cat /version.json
	I1025 10:20:54.490072  631515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-899665
	I1025 10:20:54.490141  631515 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:20:54.490232  631515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-899665
	I1025 10:20:54.520545  631515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/no-preload-899665/id_rsa Username:docker}
	I1025 10:20:54.521981  631515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/no-preload-899665/id_rsa Username:docker}
	I1025 10:20:54.733828  631515 ssh_runner.go:195] Run: systemctl --version
	I1025 10:20:54.742641  631515 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:20:54.792903  631515 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:20:54.798610  631515 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:20:54.798691  631515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:20:54.808376  631515 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:20:54.808416  631515 start.go:495] detecting cgroup driver to use...
	I1025 10:20:54.808458  631515 detect.go:190] detected "systemd" cgroup driver on host os
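Note: detect.go picks the cgroup driver by inspecting the host; on a cgroup-v2 system with systemd as init, "systemd" is the right answer, and that choice flows into both the kubelet config (cgroupDriver: systemd) and the cri-o config edits below. One common detection is to check for /sys/fs/cgroup/cgroup.controllers and /run/systemd/system; a sketch of that standard check, which is not necessarily minikube's exact logic:

package main

import (
	"fmt"
	"os"
)

// cgroupDriver returns "systemd" when the host exposes the unified
// cgroup-v2 hierarchy and systemd is the init system, else "cgroupfs".
func cgroupDriver() string {
	// cgroup.controllers only exists at the cgroup root on cgroup v2.
	_, v2Err := os.Stat("/sys/fs/cgroup/cgroup.controllers")
	// /run/systemd/system exists when systemd is PID 1.
	_, sdErr := os.Stat("/run/systemd/system")
	if v2Err == nil && sdErr == nil {
		return "systemd"
	}
	return "cgroupfs"
}

func main() {
	fmt.Println(cgroupDriver())
}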
	I1025 10:20:54.808516  631515 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:20:54.825112  631515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:20:54.840389  631515 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:20:54.840461  631515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:20:54.857870  631515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:20:54.873674  631515 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:20:54.974894  631515 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:20:55.067020  631515 docker.go:234] disabling docker service ...
	I1025 10:20:55.067099  631515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:20:55.083790  631515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:20:55.098241  631515 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:20:55.205969  631515 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:20:55.316573  631515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:20:55.332941  631515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:20:55.354826  631515 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:20:55.354909  631515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:55.366776  631515 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 10:20:55.366853  631515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:55.380553  631515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:55.392919  631515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:55.404830  631515 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:20:55.416172  631515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:55.429033  631515 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:55.441980  631515 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:55.453709  631515 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:20:55.463294  631515 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:20:55.475014  631515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:20:55.590156  631515 ssh_runner.go:195] Run: sudo systemctl restart crio
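Note: the sed run above patches /etc/crio/crio.conf.d/02-crio.conf in place rather than templating a fresh file: it pins the pause image, forces the systemd cgroup manager, re-adds conmon_cgroup = "pod" (required once cgroup_manager is systemd), and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls so containers can bind low ports. Assuming all of those edits land, the drop-in should contain roughly the following; the sketch writes that end state directly instead of patching with sed, and the surrounding keys in the real kicbase 02-crio.conf may differ:

package main

import "os"

// expected end state of the cri-o drop-in after the sed edits above;
// illustrative, reconstructed from the logged commands.
const crioDropIn = `[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"
`

func main() {
	if err := os.WriteFile("/etc/crio/crio.conf.d/02-crio.conf", []byte(crioDropIn), 0644); err != nil {
		os.Exit(1)
	}
	// a `systemctl daemon-reload && systemctl restart crio` must follow,
	// as the log does at 10:20:55.475014 and 10:20:55.590156.
}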
	I1025 10:20:55.724697  631515 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:20:55.724771  631515 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:20:55.729391  631515 start.go:563] Will wait 60s for crictl version
	I1025 10:20:55.729451  631515 ssh_runner.go:195] Run: which crictl
	I1025 10:20:55.733598  631515 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:20:55.765927  631515 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:20:55.766012  631515 ssh_runner.go:195] Run: crio --version
	I1025 10:20:55.799220  631515 ssh_runner.go:195] Run: crio --version
	I1025 10:20:55.837022  631515 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:20:56.243420  630019 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 10:20:56.243529  630019 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 10:20:56.243564  630019 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 10:20:56.273312  630019 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 10:20:56.273361  630019 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 10:20:56.392447  630019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.839475838s)
	I1025 10:20:56.557287  630019 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 10:20:56.562866  630019 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:20:56.562900  630019 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
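Note: the 403 -> 500 -> (eventually) 200 progression above is the normal apiserver restart sequence. /healthz is served before RBAC bootstrap finishes, so the anonymous probe is first Forbidden; the endpoint then returns 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still pending, and only reports ok once every hook passes. minikube therefore treats 403 and 500 alike as "not ready yet" and keeps polling. A sketch of such a loop, anonymous HTTPS with certificate verification skipped to match the probe in the log:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes. 403 (RBAC not bootstrapped) and 500 (post-start
// hooks still running) both count as "keep waiting".
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// the probe is anonymous, so skip cert verification as the log does
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.94.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}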
	I1025 10:20:56.946525  630019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.392017509s)
	I1025 10:20:56.946894  630019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.218766709s)
	I1025 10:20:56.951372  630019 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-667966 addons enable metrics-server
	
	I1025 10:20:56.953992  630019 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1025 10:20:55.838326  631515 cli_runner.go:164] Run: docker network inspect no-preload-899665 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:20:55.866402  631515 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 10:20:55.871215  631515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:20:55.883474  631515 kubeadm.go:883] updating cluster {Name:no-preload-899665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-899665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:20:55.883647  631515 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:20:55.883698  631515 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:20:55.918530  631515 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:20:55.918555  631515 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:20:55.918564  631515 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1025 10:20:55.918692  631515 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-899665 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-899665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
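Note: the empty ExecStart= line in the unit above is deliberate systemd syntax. A drop-in cannot add a second ExecStart for a non-oneshot service, so it first clears the inherited value and then supplies the full kubelet command line with the node-specific flags (--hostname-override, --node-ip, --kubeconfig). A sketch generating such a drop-in; the path matches the scp target logged below and the flag set is abbreviated:

package main

import (
	"fmt"
	"os"
)

// writeKubeletDropIn emits a systemd drop-in that overrides ExecStart.
// The empty ExecStart= clears the value inherited from kubelet.service;
// without it systemd would reject a second ExecStart for this unit type.
func writeKubeletDropIn(nodeName, nodeIP string) error {
	unit := fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s
`, nodeName, nodeIP)
	return os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(unit), 0644)
}

func main() {
	if err := writeKubeletDropIn("no-preload-899665", "192.168.76.2"); err != nil {
		os.Exit(1)
	}
	// follow with `systemctl daemon-reload && systemctl restart kubelet`
}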
	I1025 10:20:55.918774  631515 ssh_runner.go:195] Run: crio config
	I1025 10:20:55.987775  631515 cni.go:84] Creating CNI manager for ""
	I1025 10:20:55.987800  631515 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:20:55.987835  631515 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:20:55.987866  631515 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-899665 NodeName:no-preload-899665 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:20:55.988045  631515 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-899665"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:20:55.988168  631515 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:20:56.002469  631515 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:20:56.002547  631515 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:20:56.012923  631515 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 10:20:56.028715  631515 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:20:56.044472  631515 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1025 10:20:56.060245  631515 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:20:56.064876  631515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:20:56.077354  631515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:20:56.208720  631515 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:20:56.238889  631515 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/no-preload-899665 for IP: 192.168.76.2
	I1025 10:20:56.238916  631515 certs.go:195] generating shared ca certs ...
	I1025 10:20:56.238936  631515 certs.go:227] acquiring lock for ca certs: {Name:mke559c04eea9c265cb2a7beab0da125bee52db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:56.239091  631515 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key
	I1025 10:20:56.239135  631515 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key
	I1025 10:20:56.239144  631515 certs.go:257] generating profile certs ...
	I1025 10:20:56.239269  631515 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/no-preload-899665/client.key
	I1025 10:20:56.239354  631515 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/no-preload-899665/apiserver.key.3b890db5
	I1025 10:20:56.239404  631515 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/no-preload-899665/proxy-client.key
	I1025 10:20:56.239554  631515 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem (1338 bytes)
	W1025 10:20:56.239589  631515 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455_empty.pem, impossibly tiny 0 bytes
	I1025 10:20:56.239600  631515 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:20:56.239628  631515 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:20:56.239654  631515 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:20:56.239680  631515 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem (1679 bytes)
	I1025 10:20:56.239738  631515 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:20:56.240543  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:20:56.303265  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 10:20:56.337533  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:20:56.370571  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:20:56.414944  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/no-preload-899665/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 10:20:56.442232  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/no-preload-899665/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 10:20:56.465933  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/no-preload-899665/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:20:56.492999  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/no-preload-899665/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 10:20:56.520005  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /usr/share/ca-certificates/3254552.pem (1708 bytes)
	I1025 10:20:56.548457  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:20:56.576657  631515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem --> /usr/share/ca-certificates/325455.pem (1338 bytes)
	I1025 10:20:56.603351  631515 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:20:56.620679  631515 ssh_runner.go:195] Run: openssl version
	I1025 10:20:56.628144  631515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:20:56.640781  631515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:20:56.645951  631515 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:32 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:20:56.646023  631515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:20:56.696572  631515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:20:56.708539  631515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/325455.pem && ln -fs /usr/share/ca-certificates/325455.pem /etc/ssl/certs/325455.pem"
	I1025 10:20:56.721638  631515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/325455.pem
	I1025 10:20:56.727654  631515 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:38 /usr/share/ca-certificates/325455.pem
	I1025 10:20:56.727720  631515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/325455.pem
	I1025 10:20:56.787845  631515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/325455.pem /etc/ssl/certs/51391683.0"
	I1025 10:20:56.802257  631515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3254552.pem && ln -fs /usr/share/ca-certificates/3254552.pem /etc/ssl/certs/3254552.pem"
	I1025 10:20:56.815936  631515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3254552.pem
	I1025 10:20:56.822761  631515 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:38 /usr/share/ca-certificates/3254552.pem
	I1025 10:20:56.822840  631515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3254552.pem
	I1025 10:20:56.878515  631515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3254552.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:20:56.892169  631515 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:20:56.897844  631515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:20:56.960356  631515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:20:57.021535  631515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:20:57.073286  631515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:20:57.121738  631515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:20:57.173939  631515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1025 10:20:57.216691  631515 kubeadm.go:400] StartCluster: {Name:no-preload-899665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-899665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:20:57.216805  631515 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:20:57.216883  631515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:20:57.255924  631515 cri.go:89] found id: "5120b28e61a325e39f449795f46e9d4332fe4fe8d721f0cb753fff3aeddf5964"
	I1025 10:20:57.255965  631515 cri.go:89] found id: "352d3fd34e0c2d541fcf1e1a74e6466f8d1c2eeb5794c69f26b05784aa993d7f"
	I1025 10:20:57.255971  631515 cri.go:89] found id: "b199511be2bb272a9b6fcefc2c7f2d0cc2c364bcb33d5762b0f79b58442e445a"
	I1025 10:20:57.255976  631515 cri.go:89] found id: "f94925c7a05442fb6214b27d55f74ec54efa54bb994038837f4ee6aec190c793"
	I1025 10:20:57.255979  631515 cri.go:89] found id: ""
	I1025 10:20:57.256031  631515 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:20:57.274862  631515 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:20:57Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:20:57.274937  631515 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:20:57.287218  631515 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:20:57.287249  631515 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:20:57.287310  631515 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:20:57.299539  631515 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:20:57.300269  631515 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-899665" does not appear in /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:20:57.300948  631515 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-321838/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-899665" cluster setting kubeconfig missing "no-preload-899665" context setting]
	I1025 10:20:57.301940  631515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:57.303983  631515 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:20:57.317622  631515 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1025 10:20:57.317674  631515 kubeadm.go:601] duration metric: took 30.418229ms to restartPrimaryControlPlane
	I1025 10:20:57.317690  631515 kubeadm.go:402] duration metric: took 101.010179ms to StartCluster
	I1025 10:20:57.317714  631515 settings.go:142] acquiring lock: {Name:mkccf1ae03faaf532633e370e89b56d0fc3d2b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:57.317790  631515 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:20:57.319898  631515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:57.320234  631515 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:20:57.320527  631515 config.go:182] Loaded profile config "no-preload-899665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:20:57.320635  631515 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:20:57.320943  631515 addons.go:69] Setting dashboard=true in profile "no-preload-899665"
	I1025 10:20:57.320972  631515 addons.go:238] Setting addon dashboard=true in "no-preload-899665"
	W1025 10:20:57.320981  631515 addons.go:247] addon dashboard should already be in state true
	I1025 10:20:57.321132  631515 host.go:66] Checking if "no-preload-899665" exists ...
	I1025 10:20:57.321068  631515 addons.go:69] Setting storage-provisioner=true in profile "no-preload-899665"
	I1025 10:20:57.321179  631515 addons.go:238] Setting addon storage-provisioner=true in "no-preload-899665"
	W1025 10:20:57.321193  631515 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:20:57.321235  631515 host.go:66] Checking if "no-preload-899665" exists ...
	I1025 10:20:57.321062  631515 addons.go:69] Setting default-storageclass=true in profile "no-preload-899665"
	I1025 10:20:57.321352  631515 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-899665"
	I1025 10:20:57.321680  631515 cli_runner.go:164] Run: docker container inspect no-preload-899665 --format={{.State.Status}}
	I1025 10:20:57.321789  631515 cli_runner.go:164] Run: docker container inspect no-preload-899665 --format={{.State.Status}}
	I1025 10:20:57.321805  631515 cli_runner.go:164] Run: docker container inspect no-preload-899665 --format={{.State.Status}}
	I1025 10:20:57.326294  631515 out.go:179] * Verifying Kubernetes components...
	I1025 10:20:57.347789  631515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:20:57.358399  631515 addons.go:238] Setting addon default-storageclass=true in "no-preload-899665"
	W1025 10:20:57.358485  631515 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:20:57.358533  631515 host.go:66] Checking if "no-preload-899665" exists ...
	I1025 10:20:57.358719  631515 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:20:57.359240  631515 cli_runner.go:164] Run: docker container inspect no-preload-899665 --format={{.State.Status}}
	I1025 10:20:57.360118  631515 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:20:57.360276  631515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:20:57.360243  631515 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:20:57.360411  631515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-899665
	I1025 10:20:57.362921  631515 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 10:20:56.954911  630019 addons.go:514] duration metric: took 2.648067112s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1025 10:20:57.057882  630019 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 10:20:57.064121  630019 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:20:57.064153  630019 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:20:57.557503  630019 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 10:20:57.564175  630019 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:20:57.564208  630019 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:20:58.057498  630019 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 10:20:58.062542  630019 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1025 10:20:58.064041  630019 api_server.go:141] control plane version: v1.34.1
	I1025 10:20:58.064072  630019 api_server.go:131] duration metric: took 3.507323093s to wait for apiserver health ...
	I1025 10:20:58.064084  630019 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:20:58.068404  630019 system_pods.go:59] 8 kube-system pods found
	I1025 10:20:58.068447  630019 system_pods.go:61] "coredns-66bc5c9577-r94h4" [2115a28b-31dc-4c2c-92cc-673a27e36bbf] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 10:20:58.068459  630019 system_pods.go:61] "etcd-newest-cni-667966" [11d44ba6-f334-4879-aa97-64a7a7607270] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:20:58.068467  630019 system_pods.go:61] "kindnet-srprb" [02e64d9a-cbe3-4e98-81a0-7c609fa2b1bb] Running
	I1025 10:20:58.068476  630019 system_pods.go:61] "kube-apiserver-newest-cni-667966" [5cec7e59-41bf-413f-a61f-f10bb6663011] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:20:58.068485  630019 system_pods.go:61] "kube-controller-manager-newest-cni-667966" [ff16c3cb-b8d1-4823-a897-47d3d0e58335] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:20:58.068495  630019 system_pods.go:61] "kube-proxy-vngwv" [273b5cf5-0600-4009-bab3-06b3a900b02d] Running
	I1025 10:20:58.068500  630019 system_pods.go:61] "kube-scheduler-newest-cni-667966" [9aac2144-6942-4b66-9a48-0defb4aba756] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:20:58.068505  630019 system_pods.go:61] "storage-provisioner" [bd681a48-b157-41ff-b49f-5189827996b1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 10:20:58.068513  630019 system_pods.go:74] duration metric: took 4.421663ms to wait for pod list to return data ...
	I1025 10:20:58.068527  630019 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:20:58.071278  630019 default_sa.go:45] found service account: "default"
	I1025 10:20:58.071305  630019 default_sa.go:55] duration metric: took 2.770038ms for default service account to be created ...
	I1025 10:20:58.071351  630019 kubeadm.go:586] duration metric: took 3.764635819s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 10:20:58.071377  630019 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:20:58.074474  630019 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 10:20:58.074513  630019 node_conditions.go:123] node cpu capacity is 8
	I1025 10:20:58.074532  630019 node_conditions.go:105] duration metric: took 3.14888ms to run NodePressure ...
	I1025 10:20:58.074548  630019 start.go:241] waiting for startup goroutines ...
	I1025 10:20:58.074557  630019 start.go:246] waiting for cluster config update ...
	I1025 10:20:58.074569  630019 start.go:255] writing updated cluster config ...
	I1025 10:20:58.074982  630019 ssh_runner.go:195] Run: rm -f paused
	I1025 10:20:58.140856  630019 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 10:20:58.143063  630019 out.go:179] * Done! kubectl is now configured to use "newest-cni-667966" cluster and "default" namespace by default
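
Note: both restarts in this log follow the same readiness pattern: poll https://<node>:8443/healthz roughly every 500ms, treat a 500 whose failing poststarthooks ([-]rbac/bootstrap-roles, [-]scheduling/bootstrap-system-priority-classes) are still settling as "not yet", and stop at the first 200 "ok". A minimal sketch of that loop, assuming the endpoint shown above and skipping TLS verification only because this snippet does not load the cluster CA:

// healthzwait.go: hedged sketch of the apiserver healthz polling above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // endpoint answered 200 "ok"
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthy("https://192.168.94.2:8443/healthz", time.Minute))
}
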
	W1025 10:20:55.767613  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	W1025 10:20:58.267561  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	I1025 10:20:57.364197  631515 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:20:57.364217  631515 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:20:57.364282  631515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-899665
	I1025 10:20:57.396014  631515 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:20:57.396229  631515 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:20:57.396633  631515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/no-preload-899665/id_rsa Username:docker}
	I1025 10:20:57.396760  631515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-899665
	I1025 10:20:57.398614  631515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/no-preload-899665/id_rsa Username:docker}
	I1025 10:20:57.431274  631515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/no-preload-899665/id_rsa Username:docker}
	I1025 10:20:57.534051  631515 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:20:57.554695  631515 node_ready.go:35] waiting up to 6m0s for node "no-preload-899665" to be "Ready" ...
	I1025 10:20:57.559887  631515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:20:57.581786  631515 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:20:57.581819  631515 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:20:57.582389  631515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:20:57.606873  631515 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:20:57.606901  631515 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:20:57.630145  631515 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:20:57.630271  631515 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:20:57.652887  631515 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:20:57.652912  631515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:20:57.673346  631515 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:20:57.673378  631515 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:20:57.688695  631515 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:20:57.688722  631515 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:20:57.703680  631515 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:20:57.703711  631515 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:20:57.718436  631515 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:20:57.718462  631515 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:20:57.734340  631515 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:20:57.734407  631515 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:20:57.750529  631515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:20:59.122649  631515 node_ready.go:49] node "no-preload-899665" is "Ready"
	I1025 10:20:59.122746  631515 node_ready.go:38] duration metric: took 1.568013142s for node "no-preload-899665" to be "Ready" ...
	I1025 10:20:59.122770  631515 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:20:59.122852  631515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:20:59.779615  631515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.197192759s)
	I1025 10:20:59.779682  631515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.219744315s)
	I1025 10:20:59.779804  631515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.029238146s)
	I1025 10:20:59.779844  631515 api_server.go:72] duration metric: took 2.459570122s to wait for apiserver process to appear ...
	I1025 10:20:59.780303  631515 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:20:59.780343  631515 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:20:59.781714  631515 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-899665 addons enable metrics-server
	
	I1025 10:20:59.786218  631515 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:20:59.786243  631515 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:20:59.791065  631515 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1025 10:21:00.765900  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	W1025 10:21:02.767489  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	I1025 10:20:59.792599  631515 addons.go:514] duration metric: took 2.471958498s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1025 10:21:00.280402  631515 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:21:00.286226  631515 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:21:00.286266  631515 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:21:00.780872  631515 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:21:00.785264  631515 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1025 10:21:00.786398  631515 api_server.go:141] control plane version: v1.34.1
	I1025 10:21:00.786425  631515 api_server.go:131] duration metric: took 1.00611446s to wait for apiserver health ...
	I1025 10:21:00.786435  631515 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:21:00.789878  631515 system_pods.go:59] 8 kube-system pods found
	I1025 10:21:00.789911  631515 system_pods.go:61] "coredns-66bc5c9577-gtnvx" [1a53a0ee-a470-493d-903e-89f7603b058d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:00.789919  631515 system_pods.go:61] "etcd-no-preload-899665" [bc328aec-c00c-4cda-9502-0ce8c5500d08] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:21:00.789924  631515 system_pods.go:61] "kindnet-sjskf" [adca7025-fccd-45d0-858a-b64ea960ec85] Running
	I1025 10:21:00.789930  631515 system_pods.go:61] "kube-apiserver-no-preload-899665" [4a125733-8a94-4a25-b13e-39d03b9baca7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:21:00.789936  631515 system_pods.go:61] "kube-controller-manager-no-preload-899665" [55994fc3-672d-4bc1-b04e-d2135639e71c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:21:00.789941  631515 system_pods.go:61] "kube-proxy-fdthr" [aea032c1-4c95-4c86-81cc-1fd23a4a3440] Running
	I1025 10:21:00.789947  631515 system_pods.go:61] "kube-scheduler-no-preload-899665" [4c46cc41-0058-4e7b-9e34-99c91ded9149] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:21:00.789954  631515 system_pods.go:61] "storage-provisioner" [f2d8d6d3-7a6f-461b-9084-c640ecc14248] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:00.789960  631515 system_pods.go:74] duration metric: took 3.520343ms to wait for pod list to return data ...
	I1025 10:21:00.789968  631515 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:21:00.792722  631515 default_sa.go:45] found service account: "default"
	I1025 10:21:00.792748  631515 default_sa.go:55] duration metric: took 2.773494ms for default service account to be created ...
	I1025 10:21:00.792757  631515 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:21:00.795779  631515 system_pods.go:86] 8 kube-system pods found
	I1025 10:21:00.795828  631515 system_pods.go:89] "coredns-66bc5c9577-gtnvx" [1a53a0ee-a470-493d-903e-89f7603b058d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:00.795840  631515 system_pods.go:89] "etcd-no-preload-899665" [bc328aec-c00c-4cda-9502-0ce8c5500d08] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:21:00.795848  631515 system_pods.go:89] "kindnet-sjskf" [adca7025-fccd-45d0-858a-b64ea960ec85] Running
	I1025 10:21:00.795856  631515 system_pods.go:89] "kube-apiserver-no-preload-899665" [4a125733-8a94-4a25-b13e-39d03b9baca7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:21:00.795872  631515 system_pods.go:89] "kube-controller-manager-no-preload-899665" [55994fc3-672d-4bc1-b04e-d2135639e71c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:21:00.795881  631515 system_pods.go:89] "kube-proxy-fdthr" [aea032c1-4c95-4c86-81cc-1fd23a4a3440] Running
	I1025 10:21:00.795886  631515 system_pods.go:89] "kube-scheduler-no-preload-899665" [4c46cc41-0058-4e7b-9e34-99c91ded9149] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:21:00.795891  631515 system_pods.go:89] "storage-provisioner" [f2d8d6d3-7a6f-461b-9084-c640ecc14248] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:00.795898  631515 system_pods.go:126] duration metric: took 3.135281ms to wait for k8s-apps to be running ...
	I1025 10:21:00.795909  631515 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:21:00.795961  631515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:21:00.811522  631515 system_svc.go:56] duration metric: took 15.60236ms WaitForService to wait for kubelet
	I1025 10:21:00.811555  631515 kubeadm.go:586] duration metric: took 3.491280563s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:21:00.811586  631515 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:21:00.814855  631515 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 10:21:00.814910  631515 node_conditions.go:123] node cpu capacity is 8
	I1025 10:21:00.814929  631515 node_conditions.go:105] duration metric: took 3.335939ms to run NodePressure ...
	I1025 10:21:00.814946  631515 start.go:241] waiting for startup goroutines ...
	I1025 10:21:00.814956  631515 start.go:246] waiting for cluster config update ...
	I1025 10:21:00.814971  631515 start.go:255] writing updated cluster config ...
	I1025 10:21:00.815274  631515 ssh_runner.go:195] Run: rm -f paused
	I1025 10:21:00.820049  631515 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:00.824865  631515 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gtnvx" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:21:02.831461  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
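
Note: the final wait above polls each labeled kube-system pod for up to 4m0s until it reports Ready or is gone. Re-expressed with client-go as a rough sketch (standard Kubernetes package paths; the kubeconfig path and the one pod name are taken from this log, the 2s poll interval is an assumption):

// podready.go: hedged sketch of the "Ready or be gone" pod wait above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReadyOrGone returns true if the pod is Ready or no longer exists.
func podReadyOrGone(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return true, nil // "be gone" also satisfies the wait
	}
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/21767-321838/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		ok, err := podReadyOrGone(cs, "kube-system", "coredns-66bc5c9577-gtnvx")
		if err == nil && ok {
			fmt.Println("pod is Ready (or gone)")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod readiness")
}
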
	
	
	==> CRI-O <==
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.796730113Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.800654063Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=cfd2d95e-fc7f-42b0-87ee-50cb0527469b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.801603126Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=cd7d8b0e-4637-4ec4-8021-29e88d46dbe6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.802971733Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.80351566Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.804030261Z" level=info msg="Ran pod sandbox 12c58a1bf5193964d2d7ccaffd71f203fe55cd1693de5312555216d90fb8a0be with infra container: kube-system/kube-proxy-vngwv/POD" id=cfd2d95e-fc7f-42b0-87ee-50cb0527469b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.804527544Z" level=info msg="Ran pod sandbox 72b11ed48bdf1f74a55c55568fed114aa4b3d7bedbc25067adc04ab97c3a4dcc with infra container: kube-system/kindnet-srprb/POD" id=cd7d8b0e-4637-4ec4-8021-29e88d46dbe6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.805925096Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ef71eb44-813f-4ce2-ae66-7c039aaf0769 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.80596876Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=c786d602-591c-407d-9e3a-f77e8cbca9d4 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.807123005Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=f7931f9d-b2cc-45a1-8b2b-a0c301097336 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.80717605Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=936e64b0-3d4b-46a4-9c7c-2763d1ecaf7a name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.808408741Z" level=info msg="Creating container: kube-system/kube-proxy-vngwv/kube-proxy" id=611ed702-6ce9-420f-a05b-ff518a3b12f9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.808553008Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.809122778Z" level=info msg="Creating container: kube-system/kindnet-srprb/kindnet-cni" id=a219a0c6-b63b-4df4-bc2e-1b6a35d15529 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.809230771Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.818828952Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.819439568Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.819535566Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.819924837Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.862883937Z" level=info msg="Created container 3c68bd23f6660eb6639e6181698b7136ae4ed8928495d52e175482795618807a: kube-system/kindnet-srprb/kindnet-cni" id=a219a0c6-b63b-4df4-bc2e-1b6a35d15529 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.86394886Z" level=info msg="Starting container: 3c68bd23f6660eb6639e6181698b7136ae4ed8928495d52e175482795618807a" id=c460c58d-b011-4ac0-aea9-29160fd98215 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.866557192Z" level=info msg="Started container" PID=1033 containerID=3c68bd23f6660eb6639e6181698b7136ae4ed8928495d52e175482795618807a description=kube-system/kindnet-srprb/kindnet-cni id=c460c58d-b011-4ac0-aea9-29160fd98215 name=/runtime.v1.RuntimeService/StartContainer sandboxID=72b11ed48bdf1f74a55c55568fed114aa4b3d7bedbc25067adc04ab97c3a4dcc
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.870833231Z" level=info msg="Created container b05ffe134a05ebdc673146b172ae89b63a2a4e55e75a9f8330b396ca51baaa1f: kube-system/kube-proxy-vngwv/kube-proxy" id=611ed702-6ce9-420f-a05b-ff518a3b12f9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.871782707Z" level=info msg="Starting container: b05ffe134a05ebdc673146b172ae89b63a2a4e55e75a9f8330b396ca51baaa1f" id=53bd2088-9119-421f-90b6-f662c765b5fc name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:20:56 newest-cni-667966 crio[516]: time="2025-10-25T10:20:56.875881355Z" level=info msg="Started container" PID=1034 containerID=b05ffe134a05ebdc673146b172ae89b63a2a4e55e75a9f8330b396ca51baaa1f description=kube-system/kube-proxy-vngwv/kube-proxy id=53bd2088-9119-421f-90b6-f662c765b5fc name=/runtime.v1.RuntimeService/StartContainer sandboxID=12c58a1bf5193964d2d7ccaffd71f203fe55cd1693de5312555216d90fb8a0be
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3c68bd23f6660       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   7 seconds ago       Running             kindnet-cni               1                   72b11ed48bdf1       kindnet-srprb                               kube-system
	b05ffe134a05e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   7 seconds ago       Running             kube-proxy                1                   12c58a1bf5193       kube-proxy-vngwv                            kube-system
	dc5e1fe15e732       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   10 seconds ago      Running             kube-scheduler            1                   40bc2a940a2ec       kube-scheduler-newest-cni-667966            kube-system
	9f8c1df6dfdf4       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   10 seconds ago      Running             etcd                      1                   5cb89e1d2c833       etcd-newest-cni-667966                      kube-system
	043d021586bed       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   10 seconds ago      Running             kube-apiserver            1                   41640438055b1       kube-apiserver-newest-cni-667966            kube-system
	d1f99cc829179       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   10 seconds ago      Running             kube-controller-manager   1                   989e151ffd5ee       kube-controller-manager-newest-cni-667966   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-667966
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-667966
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=newest-cni-667966
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_20_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:20:25 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-667966
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:20:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:20:56 +0000   Sat, 25 Oct 2025 10:20:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:20:56 +0000   Sat, 25 Oct 2025 10:20:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:20:56 +0000   Sat, 25 Oct 2025 10:20:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 25 Oct 2025 10:20:56 +0000   Sat, 25 Oct 2025 10:20:23 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-667966
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                276bfa54-9db8-48b4-86d5-3278d4455526
	  Boot ID:                    41de8bfa-0cc1-441f-80d4-c56fb8371229
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-667966                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         36s
	  kube-system                 kindnet-srprb                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-newest-cni-667966             250m (3%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-newest-cni-667966    200m (2%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-vngwv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-newest-cni-667966             100m (1%)     0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 30s   kube-proxy       
	  Normal  Starting                 7s    kube-proxy       
	  Normal  Starting                 37s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s   kubelet          Node newest-cni-667966 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s   kubelet          Node newest-cni-667966 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s   kubelet          Node newest-cni-667966 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           32s   node-controller  Node newest-cni-667966 event: Registered Node newest-cni-667966 in Controller
	  Normal  RegisteredNode           5s    node-controller  Node newest-cni-667966 event: Registered Node newest-cni-667966 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[Oct25 10:18] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 3d 4d bf 49 5d 08 06
	[  +0.000365] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fe 72 b8 ab d2 81 08 06
	[ +29.291338] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 96 23 11 37 e3 00 08 06
	[  +0.000335] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[ +21.527050] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 89 98 95 1f c3 08 06
	[  +0.000689] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[Oct25 10:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[  +9.472150] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
	[  +6.585715] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ce 90 e9 36 a0 95 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[ +15.111475] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 5e 04 d2 54 0d 08 06
	[  +0.000467] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
	
	
	==> etcd [9f8c1df6dfdf4d3f7a952f8fecf040c1639fbc9112d5b20da3d4311228fe970b] <==
	{"level":"warn","ts":"2025-10-25T10:20:55.489134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.509154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.528252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.535916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.544130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.552608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.560037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.568475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.576331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.587621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.594872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.602268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.610562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.617603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.625252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.633274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.640438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.648932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.657708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.665381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.673441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.693581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.700665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.707769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:55.769398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33716","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:21:04 up  2:03,  0 user,  load average: 5.91, 5.01, 5.96
	Linux newest-cni-667966 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3c68bd23f6660eb6639e6181698b7136ae4ed8928495d52e175482795618807a] <==
	I1025 10:20:57.141110       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:20:57.142750       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1025 10:20:57.142912       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:20:57.142928       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:20:57.142955       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:20:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:20:57.539273       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:20:57.540749       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:20:57.540777       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	E1025 10:20:57.539976       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 10:20:57.540005       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 10:20:57.540656       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1025 10:20:57.541162       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 10:20:58.841707       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:20:58.841758       1 metrics.go:72] Registering metrics
	I1025 10:20:58.841837       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [043d021586bedd90d0ccb57b16a6588989a4f1d67466bdf08a11a2fad83d6525] <==
	I1025 10:20:56.337307       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:20:56.337314       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:20:56.337608       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1025 10:20:56.340013       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 10:20:56.346626       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 10:20:56.346804       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:20:56.349925       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 10:20:56.363422       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 10:20:56.371437       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 10:20:56.371537       1 policy_source.go:240] refreshing policies
	I1025 10:20:56.383458       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:20:56.387804       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:20:56.582240       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:20:56.645754       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:20:56.679497       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:20:56.704061       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:20:56.713358       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:20:56.772009       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.119.143"}
	I1025 10:20:56.784106       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.240.218"}
	I1025 10:20:57.242668       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:20:59.834841       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:21:00.185540       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:21:00.185540       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:21:00.235050       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:21:00.235050       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d1f99cc829179c6c6f2484ba5bc57e6507269d2e725b6feddf3428922eceb51d] <==
	I1025 10:20:59.682119       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 10:20:59.682199       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 10:20:59.682508       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 10:20:59.683771       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 10:20:59.685566       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:20:59.688485       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 10:20:59.688589       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 10:20:59.688715       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-667966"
	I1025 10:20:59.688719       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 10:20:59.688769       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 10:20:59.688833       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:20:59.688837       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 10:20:59.688889       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 10:20:59.688896       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 10:20:59.688904       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 10:20:59.691119       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 10:20:59.692391       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:20:59.694989       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 10:20:59.695130       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 10:20:59.697698       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:20:59.700113       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 10:20:59.704509       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:20:59.704534       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:20:59.704546       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:20:59.710276       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [b05ffe134a05ebdc673146b172ae89b63a2a4e55e75a9f8330b396ca51baaa1f] <==
	I1025 10:20:56.940947       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:20:57.019771       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:20:57.120584       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:20:57.120627       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1025 10:20:57.120771       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:20:57.169050       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:20:57.169183       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:20:57.174988       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:20:57.175532       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:20:57.175803       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:20:57.179736       1 config.go:309] "Starting node config controller"
	I1025 10:20:57.182436       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:20:57.182478       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:20:57.181787       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:20:57.182555       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:20:57.180254       1 config.go:200] "Starting service config controller"
	I1025 10:20:57.182661       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:20:57.180196       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:20:57.182735       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:20:57.283452       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 10:20:57.283521       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:20:57.283420       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [dc5e1fe15e732a2803c1f34dbd191e88cbb7d2a206a70f2c5cceb65b9334f033] <==
	I1025 10:20:55.709585       1 serving.go:386] Generated self-signed cert in-memory
	I1025 10:20:57.100909       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:20:57.100938       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:20:57.106061       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1025 10:20:57.106235       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1025 10:20:57.106147       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:20:57.106422       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:20:57.106115       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:20:57.106505       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:20:57.106613       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:20:57.106635       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 10:20:57.207282       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:20:57.207444       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:20:57.207485       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 25 10:20:55 newest-cni-667966 kubelet[656]: E1025 10:20:55.567950     656 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-667966\" not found" node="newest-cni-667966"
	Oct 25 10:20:55 newest-cni-667966 kubelet[656]: E1025 10:20:55.568243     656 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-667966\" not found" node="newest-cni-667966"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: E1025 10:20:56.150968     656 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-667966\" not found" node="newest-cni-667966"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.291425     656 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-667966"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: E1025 10:20:56.417004     656 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-667966\" already exists" pod="kube-system/kube-controller-manager-newest-cni-667966"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.417227     656 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-667966"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: E1025 10:20:56.425545     656 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-667966\" already exists" pod="kube-system/kube-scheduler-newest-cni-667966"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.425598     656 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-667966"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: E1025 10:20:56.435826     656 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-667966\" already exists" pod="kube-system/etcd-newest-cni-667966"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.435871     656 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-667966"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: E1025 10:20:56.446523     656 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-667966\" already exists" pod="kube-system/kube-apiserver-newest-cni-667966"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.482849     656 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-667966"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.482966     656 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-667966"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.483015     656 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.484038     656 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.487014     656 apiserver.go:52] "Watching apiserver"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.492820     656 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.573264     656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02e64d9a-cbe3-4e98-81a0-7c609fa2b1bb-xtables-lock\") pod \"kindnet-srprb\" (UID: \"02e64d9a-cbe3-4e98-81a0-7c609fa2b1bb\") " pod="kube-system/kindnet-srprb"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.573367     656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/273b5cf5-0600-4009-bab3-06b3a900b02d-lib-modules\") pod \"kube-proxy-vngwv\" (UID: \"273b5cf5-0600-4009-bab3-06b3a900b02d\") " pod="kube-system/kube-proxy-vngwv"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.573401     656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/02e64d9a-cbe3-4e98-81a0-7c609fa2b1bb-cni-cfg\") pod \"kindnet-srprb\" (UID: \"02e64d9a-cbe3-4e98-81a0-7c609fa2b1bb\") " pod="kube-system/kindnet-srprb"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.573424     656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02e64d9a-cbe3-4e98-81a0-7c609fa2b1bb-lib-modules\") pod \"kindnet-srprb\" (UID: \"02e64d9a-cbe3-4e98-81a0-7c609fa2b1bb\") " pod="kube-system/kindnet-srprb"
	Oct 25 10:20:56 newest-cni-667966 kubelet[656]: I1025 10:20:56.573480     656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/273b5cf5-0600-4009-bab3-06b3a900b02d-xtables-lock\") pod \"kube-proxy-vngwv\" (UID: \"273b5cf5-0600-4009-bab3-06b3a900b02d\") " pod="kube-system/kube-proxy-vngwv"
	Oct 25 10:20:59 newest-cni-667966 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:20:59 newest-cni-667966 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:20:59 newest-cni-667966 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
-- /stdout --
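The "Allocated resources" percentages in the node description above are request-to-capacity ratios rounded down. A minimal Go sketch of that arithmetic, assuming the node capacity reported elsewhere in this run (NCPU:8 and MemTotal:33652072448 from the docker info line further below):

	// percent.go - reproduce the 9% CPU / 0% memory figures shown above.
	// Capacity values are assumptions taken from the host's docker info.
	package main

	import "fmt"

	func main() {
		cpuRequestMilli := 750                       // 750m total CPU requests
		cpuCapacityMilli := 8 * 1000                 // 8 CPUs on the node (assumed)
		memRequestMi := 150                          // 150Mi total memory requests
		memCapacityMi := 33652072448 / (1024 * 1024) // ~32093Mi (assumed)

		fmt.Printf("cpu    %d%%\n", cpuRequestMilli*100/cpuCapacityMilli) // 9%
		fmt.Printf("memory %d%%\n", memRequestMi*100/memCapacityMi)       // 0%
	}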
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-667966 -n newest-cni-667966
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-667966 -n newest-cni-667966: exit status 2 (424.212799ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
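The --format flag used above takes a Go text/template evaluated against minikube's status struct; {{.APIServer}} prints just that field, while the exit code (2 here) separately signals a degraded component. A hedged sketch of the template mechanics; the struct below is an illustrative assumption modeled on the templates in this report, not minikube's exact type:

	package main

	import (
		"os"
		"text/template"
	)

	// Status only approximates what minikube renders; field names are
	// assumptions based on the {{.Host}} / {{.APIServer}} templates above.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		if err := tmpl.Execute(os.Stdout, st); err != nil { // prints "Running"
			panic(err)
		}
	}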
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-667966 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-r94h4 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-6q4tv kubernetes-dashboard-855c9754f9-nlbwv
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-667966 describe pod coredns-66bc5c9577-r94h4 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-6q4tv kubernetes-dashboard-855c9754f9-nlbwv
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-667966 describe pod coredns-66bc5c9577-r94h4 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-6q4tv kubernetes-dashboard-855c9754f9-nlbwv: exit status 1 (90.32392ms)
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-r94h4" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-6q4tv" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-nlbwv" not found
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-667966 describe pod coredns-66bc5c9577-r94h4 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-6q4tv kubernetes-dashboard-855c9754f9-nlbwv: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (7.05s)
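The post-mortem above finds leftover pod names with a field selector plus a jsonpath template, and the follow-up describe fails because those pods were already gone. The same non-running-pods probe via client-go, as a sketch (it assumes a reachable cluster and a kubeconfig path in $KUBECONFIG):

	package main

	import (
		"context"
		"fmt"
		"os"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Equivalent of: kubectl get po -A --field-selector=status.phase!=Running
		pods, err := cs.CoreV1().Pods("").List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Name) // mirrors the jsonpath name list above
		}
	}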
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.49s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-714798 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-714798 --alsologtostderr -v=1: exit status 80 (1.829203709s)
-- stdout --
	* Pausing node old-k8s-version-714798 ... 
	
	
-- /stdout --
** stderr ** 
	I1025 10:21:28.660217  642351 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:21:28.660499  642351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:21:28.660510  642351 out.go:374] Setting ErrFile to fd 2...
	I1025 10:21:28.660517  642351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:21:28.660728  642351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 10:21:28.660991  642351 out.go:368] Setting JSON to false
	I1025 10:21:28.661046  642351 mustload.go:65] Loading cluster: old-k8s-version-714798
	I1025 10:21:28.661368  642351 config.go:182] Loaded profile config "old-k8s-version-714798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:21:28.661827  642351 cli_runner.go:164] Run: docker container inspect old-k8s-version-714798 --format={{.State.Status}}
	I1025 10:21:28.680743  642351 host.go:66] Checking if "old-k8s-version-714798" exists ...
	I1025 10:21:28.681127  642351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:21:28.746131  642351 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:88 SystemTime:2025-10-25 10:21:28.733241956 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:21:28.747074  642351 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-714798 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 10:21:28.749080  642351 out.go:179] * Pausing node old-k8s-version-714798 ... 
	I1025 10:21:28.751206  642351 host.go:66] Checking if "old-k8s-version-714798" exists ...
	I1025 10:21:28.751612  642351 ssh_runner.go:195] Run: systemctl --version
	I1025 10:21:28.751660  642351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714798
	I1025 10:21:28.771704  642351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/old-k8s-version-714798/id_rsa Username:docker}
	I1025 10:21:28.875460  642351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:21:28.912539  642351 pause.go:52] kubelet running: true
	I1025 10:21:28.912641  642351 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:21:29.095600  642351 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:21:29.095722  642351 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:21:29.178066  642351 cri.go:89] found id: "83298f29677812bdb89aebe27bacd5765cc414cfbcb8ae3820f968d7dfb2a0a8"
	I1025 10:21:29.178095  642351 cri.go:89] found id: "553718397c387da8f5f2fcd092c2a59e58c71cc63b088ea724a3169ee7c5b5bc"
	I1025 10:21:29.178101  642351 cri.go:89] found id: "f986363d36450aecccdaa98aebe4eb5dbc429656a6bee1770bbfde083685da0c"
	I1025 10:21:29.178106  642351 cri.go:89] found id: "f4a2f7f040204ba504676eed9f3884012aeaf80acbd4821516096fc8bff9e833"
	I1025 10:21:29.178109  642351 cri.go:89] found id: "02ebd7cadca0e2f2e1a8fdb2d2a4025e434b7679c4e9c3329b85521f4edff815"
	I1025 10:21:29.178113  642351 cri.go:89] found id: "5538d92e1ad00d0b895ea0869e732ceaf8db5758c6940c69bb5d41a8e0661704"
	I1025 10:21:29.178117  642351 cri.go:89] found id: "bbd6a05e151245b4f918254624d45abfaa66832cc221e776d8265d0e8fa29750"
	I1025 10:21:29.178120  642351 cri.go:89] found id: "ce12ceda5c77bef4710f4a8f8a5a88ca899e512d3d2151b06751ca05f3184af3"
	I1025 10:21:29.178129  642351 cri.go:89] found id: "b25eb7cda6de2aff244793687094ba7b3ca70cb7a03ef1adb707e0d582e0580e"
	I1025 10:21:29.178144  642351 cri.go:89] found id: "2867ca1d41946eeecbfc494d499686f5ecf5f15b7090ccc842b585183da21368"
	I1025 10:21:29.178149  642351 cri.go:89] found id: "023f9eec31a026b72d57a05e021ddae34e171cf9477f9c45ccc83ccc83724ad3"
	I1025 10:21:29.178154  642351 cri.go:89] found id: ""
	I1025 10:21:29.178199  642351 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:21:29.193592  642351 retry.go:31] will retry after 250.829978ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:21:29Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:21:29.445138  642351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:21:29.460763  642351 pause.go:52] kubelet running: false
	I1025 10:21:29.460830  642351 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:21:29.610205  642351 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:21:29.610344  642351 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:21:29.690868  642351 cri.go:89] found id: "83298f29677812bdb89aebe27bacd5765cc414cfbcb8ae3820f968d7dfb2a0a8"
	I1025 10:21:29.690892  642351 cri.go:89] found id: "553718397c387da8f5f2fcd092c2a59e58c71cc63b088ea724a3169ee7c5b5bc"
	I1025 10:21:29.690896  642351 cri.go:89] found id: "f986363d36450aecccdaa98aebe4eb5dbc429656a6bee1770bbfde083685da0c"
	I1025 10:21:29.690899  642351 cri.go:89] found id: "f4a2f7f040204ba504676eed9f3884012aeaf80acbd4821516096fc8bff9e833"
	I1025 10:21:29.690902  642351 cri.go:89] found id: "02ebd7cadca0e2f2e1a8fdb2d2a4025e434b7679c4e9c3329b85521f4edff815"
	I1025 10:21:29.690905  642351 cri.go:89] found id: "5538d92e1ad00d0b895ea0869e732ceaf8db5758c6940c69bb5d41a8e0661704"
	I1025 10:21:29.690908  642351 cri.go:89] found id: "bbd6a05e151245b4f918254624d45abfaa66832cc221e776d8265d0e8fa29750"
	I1025 10:21:29.690910  642351 cri.go:89] found id: "ce12ceda5c77bef4710f4a8f8a5a88ca899e512d3d2151b06751ca05f3184af3"
	I1025 10:21:29.690913  642351 cri.go:89] found id: "b25eb7cda6de2aff244793687094ba7b3ca70cb7a03ef1adb707e0d582e0580e"
	I1025 10:21:29.690919  642351 cri.go:89] found id: "2867ca1d41946eeecbfc494d499686f5ecf5f15b7090ccc842b585183da21368"
	I1025 10:21:29.690922  642351 cri.go:89] found id: "023f9eec31a026b72d57a05e021ddae34e171cf9477f9c45ccc83ccc83724ad3"
	I1025 10:21:29.690925  642351 cri.go:89] found id: ""
	I1025 10:21:29.690978  642351 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:21:29.704701  642351 retry.go:31] will retry after 433.008535ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:21:29Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:21:30.138394  642351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:21:30.153829  642351 pause.go:52] kubelet running: false
	I1025 10:21:30.153897  642351 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:21:30.315575  642351 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:21:30.315670  642351 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:21:30.396814  642351 cri.go:89] found id: "83298f29677812bdb89aebe27bacd5765cc414cfbcb8ae3820f968d7dfb2a0a8"
	I1025 10:21:30.396837  642351 cri.go:89] found id: "553718397c387da8f5f2fcd092c2a59e58c71cc63b088ea724a3169ee7c5b5bc"
	I1025 10:21:30.396841  642351 cri.go:89] found id: "f986363d36450aecccdaa98aebe4eb5dbc429656a6bee1770bbfde083685da0c"
	I1025 10:21:30.396844  642351 cri.go:89] found id: "f4a2f7f040204ba504676eed9f3884012aeaf80acbd4821516096fc8bff9e833"
	I1025 10:21:30.396847  642351 cri.go:89] found id: "02ebd7cadca0e2f2e1a8fdb2d2a4025e434b7679c4e9c3329b85521f4edff815"
	I1025 10:21:30.396850  642351 cri.go:89] found id: "5538d92e1ad00d0b895ea0869e732ceaf8db5758c6940c69bb5d41a8e0661704"
	I1025 10:21:30.396852  642351 cri.go:89] found id: "bbd6a05e151245b4f918254624d45abfaa66832cc221e776d8265d0e8fa29750"
	I1025 10:21:30.396854  642351 cri.go:89] found id: "ce12ceda5c77bef4710f4a8f8a5a88ca899e512d3d2151b06751ca05f3184af3"
	I1025 10:21:30.396857  642351 cri.go:89] found id: "b25eb7cda6de2aff244793687094ba7b3ca70cb7a03ef1adb707e0d582e0580e"
	I1025 10:21:30.396868  642351 cri.go:89] found id: "2867ca1d41946eeecbfc494d499686f5ecf5f15b7090ccc842b585183da21368"
	I1025 10:21:30.396871  642351 cri.go:89] found id: "023f9eec31a026b72d57a05e021ddae34e171cf9477f9c45ccc83ccc83724ad3"
	I1025 10:21:30.396873  642351 cri.go:89] found id: ""
	I1025 10:21:30.396922  642351 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:21:30.414184  642351 out.go:203] 
	W1025 10:21:30.415472  642351 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:21:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:21:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 10:21:30.415489  642351 out.go:285] * 
	* 
	W1025 10:21:30.419653  642351 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 10:21:30.420958  642351 out.go:203] 
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-714798 --alsologtostderr -v=1 failed: exit status 80
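Each pause attempt in the stderr above follows the same loop: check the kubelet, enumerate containers via crictl, then run `sudo runc list -f json`, which fails because /run/runc does not exist on this node, so pause backs off and retries until it exits with GUEST_PAUSE. A simplified Go sketch of that retry shape, modeled on the retry.go lines in the log (an illustrative reconstruction, not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// retryRuncList probes `sudo runc list -f json` with a growing backoff
	// until it succeeds or the deadline passes, mimicking the retries
	// logged above (the real loop also adds jitter: 250ms, then ~433ms).
	func retryRuncList(deadline time.Duration) error {
		backoff := 250 * time.Millisecond
		start := time.Now()
		for {
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("list running: runc: %w: %s", err, out)
			}
			time.Sleep(backoff)
			backoff *= 2
		}
	}

	func main() {
		if err := retryRuncList(2 * time.Second); err != nil {
			fmt.Println("X Exiting due to GUEST_PAUSE:", err)
		}
	}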
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-714798
helpers_test.go:243: (dbg) docker inspect old-k8s-version-714798:
-- stdout --
	[
	    {
	        "Id": "0ea7bd002b137c3f6132bee18d30042afc6bb85a08179349eb3f67fde3a86ecb",
	        "Created": "2025-10-25T10:19:03.747366257Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 624949,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:20:23.661708217Z",
	            "FinishedAt": "2025-10-25T10:20:22.439232386Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/0ea7bd002b137c3f6132bee18d30042afc6bb85a08179349eb3f67fde3a86ecb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0ea7bd002b137c3f6132bee18d30042afc6bb85a08179349eb3f67fde3a86ecb/hostname",
	        "HostsPath": "/var/lib/docker/containers/0ea7bd002b137c3f6132bee18d30042afc6bb85a08179349eb3f67fde3a86ecb/hosts",
	        "LogPath": "/var/lib/docker/containers/0ea7bd002b137c3f6132bee18d30042afc6bb85a08179349eb3f67fde3a86ecb/0ea7bd002b137c3f6132bee18d30042afc6bb85a08179349eb3f67fde3a86ecb-json.log",
	        "Name": "/old-k8s-version-714798",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-714798:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-714798",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0ea7bd002b137c3f6132bee18d30042afc6bb85a08179349eb3f67fde3a86ecb",
	                "LowerDir": "/var/lib/docker/overlay2/caac5b3fb2b5e719c459568c7f64a1473d2acbb34aff947f1f76651aa0e47b7e-init/diff:/var/lib/docker/overlay2/9d1960ec6ab151df7efbd4d43f9ccd1aaf4e0bd9e7db4285118644a6d5a54279/diff",
	                "MergedDir": "/var/lib/docker/overlay2/caac5b3fb2b5e719c459568c7f64a1473d2acbb34aff947f1f76651aa0e47b7e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/caac5b3fb2b5e719c459568c7f64a1473d2acbb34aff947f1f76651aa0e47b7e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/caac5b3fb2b5e719c459568c7f64a1473d2acbb34aff947f1f76651aa0e47b7e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-714798",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-714798/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-714798",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-714798",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-714798",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e14c14548a217e08acad70a94ff612b8194ce10d18e44d38b1610ff6ad44411",
	            "SandboxKey": "/var/run/docker/netns/6e14c14548a2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-714798": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:07:d1:a3:ed:35",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cc93092e09ae8d654ec66b5e009efa3952011514f4834e7a4c9ac844956e7c64",
	                    "EndpointID": "1191ec2278d7b3d2d4eaf7d26d25e09f27426e8e73a0abef25c8752b85349e20",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-714798",
	                        "0ea7bd002b13"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
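The NetworkSettings.Ports map in the inspect output above is what later cli_runner steps in this log read to reach the node over SSH. A minimal Go sketch (an illustration, assuming the docker CLI is on PATH and the container name from this report) of the same Go-template lookup that appears verbatim further down in this log:

	// Sketch: resolve the host port mapped to the container's 22/tcp,
	// mirroring the `docker container inspect -f` template used by
	// minikube's cli_runner in this log. Names here come from the
	// inspect output above; nothing else is assumed.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"old-k8s-version-714798").Output()
		if err != nil {
			panic(err)
		}
		// Per the Ports map above, this prints "33108".
		fmt.Println(strings.TrimSpace(string(out)))
	}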
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-714798 -n old-k8s-version-714798
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-714798 -n old-k8s-version-714798: exit status 2 (363.222672ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-714798 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-714798 logs -n 25: (1.392192066s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p flannel-119085                                                                                                                                                                                                                             │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ stop    │ -p old-k8s-version-714798 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p newest-cni-667966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-714798 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p old-k8s-version-714798 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:21 UTC │
	│ addons  │ enable metrics-server -p no-preload-899665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ stop    │ -p no-preload-899665 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ addons  │ enable metrics-server -p newest-cni-667966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ stop    │ -p newest-cni-667966 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-767846 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-667966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p newest-cni-667966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ stop    │ -p default-k8s-diff-port-767846 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:21 UTC │
	│ addons  │ enable dashboard -p no-preload-899665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p no-preload-899665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ image   │ newest-cni-667966 image list --format=json                                                                                                                                                                                                    │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ pause   │ -p newest-cni-667966 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-767846 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ start   │ -p default-k8s-diff-port-767846 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	│ delete  │ -p newest-cni-667966                                                                                                                                                                                                                          │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p newest-cni-667966                                                                                                                                                                                                                          │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p disable-driver-mounts-805899                                                                                                                                                                                                               │ disable-driver-mounts-805899 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ start   │ -p embed-certs-683681 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-683681           │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	│ image   │ old-k8s-version-714798 image list --format=json                                                                                                                                                                                               │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ pause   │ -p old-k8s-version-714798 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:21:10
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:21:10.148251  638584 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:21:10.148605  638584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:21:10.148630  638584 out.go:374] Setting ErrFile to fd 2...
	I1025 10:21:10.148638  638584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:21:10.148938  638584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 10:21:10.149711  638584 out.go:368] Setting JSON to false
	I1025 10:21:10.151634  638584 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7419,"bootTime":1761380251,"procs":447,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 10:21:10.151786  638584 start.go:141] virtualization: kvm guest
	I1025 10:21:10.154262  638584 out.go:179] * [embed-certs-683681] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 10:21:10.155881  638584 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:21:10.155931  638584 notify.go:220] Checking for updates...
	I1025 10:21:10.158857  638584 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:21:10.160458  638584 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:21:10.161966  638584 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	I1025 10:21:10.163444  638584 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 10:21:10.165074  638584 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:21:10.167201  638584 config.go:182] Loaded profile config "default-k8s-diff-port-767846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:10.167413  638584 config.go:182] Loaded profile config "no-preload-899665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:10.167543  638584 config.go:182] Loaded profile config "old-k8s-version-714798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:21:10.167677  638584 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:21:10.195271  638584 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 10:21:10.195411  638584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:21:10.276912  638584 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-25 10:21:10.253206883 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:21:10.277024  638584 docker.go:318] overlay module found
	I1025 10:21:10.278915  638584 out.go:179] * Using the docker driver based on user configuration
	I1025 10:21:10.280189  638584 start.go:305] selected driver: docker
	I1025 10:21:10.280210  638584 start.go:925] validating driver "docker" against <nil>
	I1025 10:21:10.280228  638584 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:21:10.280870  638584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:21:10.351945  638584 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-25 10:21:10.340512633 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:21:10.352169  638584 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 10:21:10.352450  638584 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:21:10.354600  638584 out.go:179] * Using Docker driver with root privileges
	I1025 10:21:10.356067  638584 cni.go:84] Creating CNI manager for ""
	I1025 10:21:10.356119  638584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:21:10.356128  638584 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 10:21:10.356206  638584 start.go:349] cluster config:
	{Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:21:10.359204  638584 out.go:179] * Starting "embed-certs-683681" primary control-plane node in "embed-certs-683681" cluster
	I1025 10:21:10.360475  638584 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:21:10.361884  638584 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:21:10.363223  638584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:10.363261  638584 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:21:10.363282  638584 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 10:21:10.363300  638584 cache.go:58] Caching tarball of preloaded images
	I1025 10:21:10.363426  638584 preload.go:233] Found /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 10:21:10.363440  638584 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:21:10.363573  638584 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/config.json ...
	I1025 10:21:10.363603  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/config.json: {Name:mk7d7cb38e92abe91e5617ae8c0cde69820d256b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:10.401470  638584 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:21:10.401501  638584 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:21:10.401524  638584 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:21:10.401557  638584 start.go:360] acquireMachinesLock for embed-certs-683681: {Name:mkb49d854e007783568583b216321c2ada753d14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:21:10.401681  638584 start.go:364] duration metric: took 100.361µs to acquireMachinesLock for "embed-certs-683681"
	I1025 10:21:10.401719  638584 start.go:93] Provisioning new machine with config: &{Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:21:10.401811  638584 start.go:125] createHost starting for "" (driver="docker")
	I1025 10:21:09.341512  636484 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:21:09.341546  636484 machine.go:96] duration metric: took 4.679953004s to provisionDockerMachine
	I1025 10:21:09.341561  636484 start.go:293] postStartSetup for "default-k8s-diff-port-767846" (driver="docker")
	I1025 10:21:09.341576  636484 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:21:09.341718  636484 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:21:09.341793  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.365110  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:09.484377  636484 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:21:09.489414  636484 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:21:09.489442  636484 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:21:09.489453  636484 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/addons for local assets ...
	I1025 10:21:09.489516  636484 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/files for local assets ...
	I1025 10:21:09.489612  636484 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem -> 3254552.pem in /etc/ssl/certs
	I1025 10:21:09.489735  636484 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:21:09.499262  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:09.521134  636484 start.go:296] duration metric: took 179.55364ms for postStartSetup
	I1025 10:21:09.521229  636484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:21:09.521289  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.546865  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:09.651523  636484 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:21:09.656840  636484 fix.go:56] duration metric: took 5.400890226s for fixHost
	I1025 10:21:09.656881  636484 start.go:83] releasing machines lock for "default-k8s-diff-port-767846", held for 5.400960044s
	I1025 10:21:09.656963  636484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-767846
	I1025 10:21:09.678291  636484 ssh_runner.go:195] Run: cat /version.json
	I1025 10:21:09.678335  636484 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:21:09.678385  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.678417  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.699727  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:09.699888  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:09.801273  636484 ssh_runner.go:195] Run: systemctl --version
	I1025 10:21:09.869861  636484 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:21:09.912691  636484 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:21:09.918693  636484 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:21:09.918789  636484 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:21:09.929691  636484 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:21:09.929723  636484 start.go:495] detecting cgroup driver to use...
	I1025 10:21:09.929768  636484 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 10:21:09.929846  636484 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:21:09.947292  636484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:21:09.962309  636484 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:21:09.962380  636484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:21:09.981742  636484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:21:09.997805  636484 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:21:10.091545  636484 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:21:10.191661  636484 docker.go:234] disabling docker service ...
	I1025 10:21:10.191739  636484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:21:10.211470  636484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:21:10.232902  636484 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:21:10.343594  636484 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:21:10.458272  636484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:21:10.475115  636484 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:21:10.492690  636484 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:21:10.492760  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.505848  636484 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 10:21:10.505908  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.517567  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.531478  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.545455  636484 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:21:10.557702  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.571143  636484 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.582240  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.593233  636484 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:21:10.602910  636484 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:21:10.612119  636484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:10.705561  636484 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:21:10.849205  636484 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:21:10.849299  636484 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:21:10.853987  636484 start.go:563] Will wait 60s for crictl version
	I1025 10:21:10.854061  636484 ssh_runner.go:195] Run: which crictl
	I1025 10:21:10.858281  636484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:21:10.891437  636484 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:21:10.891545  636484 ssh_runner.go:195] Run: crio --version
	I1025 10:21:10.928397  636484 ssh_runner.go:195] Run: crio --version
	I1025 10:21:10.968448  636484 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:21:10.969831  636484 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-767846 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:21:10.988308  636484 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1025 10:21:10.993548  636484 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:21:11.007467  636484 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-767846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-767846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:21:11.007638  636484 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:11.007713  636484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:11.050081  636484 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:11.050104  636484 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:21:11.050159  636484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:11.079408  636484 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:11.079432  636484 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:21:11.079440  636484 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1025 10:21:11.079542  636484 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-767846 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-767846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:21:11.079604  636484 ssh_runner.go:195] Run: crio config
	I1025 10:21:11.135081  636484 cni.go:84] Creating CNI manager for ""
	I1025 10:21:11.135104  636484 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:21:11.135125  636484 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:21:11.135152  636484 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-767846 NodeName:default-k8s-diff-port-767846 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:21:11.135274  636484 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
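	  # 8444 mirrors the --apiserver-port=8444 flag recorded for this profile in the audit table above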
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-767846"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
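	# matches the "systemd" cgroup driver detected on the host earlier in this log (detect.go)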
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:21:11.135376  636484 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:21:11.146044  636484 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:21:11.146127  636484 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:21:11.157527  636484 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1025 10:21:11.173105  636484 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:21:11.194054  636484 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1025 10:21:11.210598  636484 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:21:11.215039  636484 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:21:11.228199  636484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:11.315547  636484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:21:11.344889  636484 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846 for IP: 192.168.103.2
	I1025 10:21:11.344914  636484 certs.go:195] generating shared ca certs ...
	I1025 10:21:11.344936  636484 certs.go:227] acquiring lock for ca certs: {Name:mke559c04eea9c265cb2a7beab0da125bee52db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:11.345096  636484 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key
	I1025 10:21:11.345147  636484 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key
	I1025 10:21:11.345159  636484 certs.go:257] generating profile certs ...
	I1025 10:21:11.345283  636484 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/client.key
	I1025 10:21:11.345382  636484 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/apiserver.key.0fbb729d
	I1025 10:21:11.345433  636484 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/proxy-client.key
	I1025 10:21:11.345576  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem (1338 bytes)
	W1025 10:21:11.345621  636484 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455_empty.pem, impossibly tiny 0 bytes
	I1025 10:21:11.345634  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:21:11.345661  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:21:11.345688  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:21:11.345716  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem (1679 bytes)
	I1025 10:21:11.345768  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:11.346665  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:21:11.371779  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 10:21:11.395674  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:21:11.420943  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:21:11.450225  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 10:21:11.471921  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:21:11.491964  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:21:11.513657  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:21:11.539802  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem --> /usr/share/ca-certificates/325455.pem (1338 bytes)
	I1025 10:21:11.564482  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /usr/share/ca-certificates/3254552.pem (1708 bytes)
	I1025 10:21:11.585472  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:21:11.605762  636484 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:21:11.620550  636484 ssh_runner.go:195] Run: openssl version
	I1025 10:21:11.628742  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/325455.pem && ln -fs /usr/share/ca-certificates/325455.pem /etc/ssl/certs/325455.pem"
	I1025 10:21:11.640494  636484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/325455.pem
	I1025 10:21:11.645456  636484 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:38 /usr/share/ca-certificates/325455.pem
	I1025 10:21:11.645535  636484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/325455.pem
	I1025 10:21:11.681821  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/325455.pem /etc/ssl/certs/51391683.0"
	I1025 10:21:11.692404  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3254552.pem && ln -fs /usr/share/ca-certificates/3254552.pem /etc/ssl/certs/3254552.pem"
	I1025 10:21:11.702722  636484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3254552.pem
	I1025 10:21:11.707367  636484 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:38 /usr/share/ca-certificates/3254552.pem
	I1025 10:21:11.707434  636484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3254552.pem
	I1025 10:21:11.744550  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3254552.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:21:11.754748  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:21:11.765670  636484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:11.770501  636484 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:32 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:11.770568  636484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:11.806437  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:21:11.816622  636484 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:21:11.821750  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:21:11.869084  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:21:11.918865  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:21:11.967891  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:21:12.023868  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:21:12.087958  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1025 10:21:12.133903  636484 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-767846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-767846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:21:12.133995  636484 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:21:12.134057  636484 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:21:12.176249  636484 cri.go:89] found id: "5651b5355eb316ad91569abe8d79084a109bfb7f5e3317226217acc032d02de1"
	I1025 10:21:12.176277  636484 cri.go:89] found id: "4a3076ac0e1e7cab1ae1e3436bd70e3c3b3965b186f842a7e0c0d524505d0c57"
	I1025 10:21:12.176284  636484 cri.go:89] found id: "19816f19d39c5773a667353841a1802f9e8d4a9493ed76177e3cffba9eb45dd7"
	I1025 10:21:12.176289  636484 cri.go:89] found id: "93e7c0501a9a92272de292874e804fe8724d5cd8097e77aa3924e634b8f8d63b"
	I1025 10:21:12.176294  636484 cri.go:89] found id: ""
	I1025 10:21:12.176379  636484 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:21:12.191582  636484 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:21:12Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:21:12.191656  636484 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:21:12.201840  636484 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:21:12.201870  636484 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:21:12.201918  636484 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:21:12.211065  636484 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:21:12.211910  636484 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-767846" does not appear in /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:21:12.212424  636484 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-321838/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-767846" cluster setting kubeconfig missing "default-k8s-diff-port-767846" context setting]
	I1025 10:21:12.212991  636484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:12.214595  636484 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:21:12.225309  636484 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1025 10:21:12.225361  636484 kubeadm.go:601] duration metric: took 23.484211ms to restartPrimaryControlPlane
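Restart detection is a short pipeline: confirm the kubeadm artifacts exist on disk, repair the host kubeconfig if the profile's cluster and context entries are missing (done above under a write lock), then diff the live kubeadm.yaml against the freshly rendered one; an empty diff means the control plane can be reused without re-running kubeadm. The deciding step in isolation:

    # exit 0 -> configs identical, no reconfiguration required
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
        && echo "running cluster does not require reconfiguration"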
	I1025 10:21:12.225372  636484 kubeadm.go:402] duration metric: took 91.480993ms to StartCluster
	I1025 10:21:12.225394  636484 settings.go:142] acquiring lock: {Name:mkccf1ae03faaf532633e370e89b56d0fc3d2b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:12.225489  636484 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:21:12.226739  636484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:12.227039  636484 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:21:12.227167  636484 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:21:12.227262  636484 config.go:182] Loaded profile config "default-k8s-diff-port-767846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:12.227271  636484 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-767846"
	I1025 10:21:12.227291  636484 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-767846"
	W1025 10:21:12.227299  636484 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:21:12.227297  636484 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-767846"
	I1025 10:21:12.227332  636484 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-767846"
	I1025 10:21:12.227339  636484 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-767846"
	W1025 10:21:12.227342  636484 addons.go:247] addon dashboard should already be in state true
	I1025 10:21:12.227353  636484 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-767846"
	I1025 10:21:12.227367  636484 host.go:66] Checking if "default-k8s-diff-port-767846" exists ...
	I1025 10:21:12.227371  636484 host.go:66] Checking if "default-k8s-diff-port-767846" exists ...
	I1025 10:21:12.227806  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.227847  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.227905  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.232961  636484 out.go:179] * Verifying Kubernetes components...
	I1025 10:21:12.234572  636484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:12.260042  636484 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:21:12.260116  636484 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:21:12.261263  636484 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-767846"
	W1025 10:21:12.261282  636484 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:21:12.261305  636484 host.go:66] Checking if "default-k8s-diff-port-767846" exists ...
	I1025 10:21:12.261728  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.262059  636484 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:21:12.262078  636484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:21:12.262129  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:12.265414  636484 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1025 10:21:09.268544  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	W1025 10:21:11.766755  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	W1025 10:21:09.831833  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:12.337504  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	I1025 10:21:12.266825  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:21:12.266852  636484 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:21:12.266926  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:12.302238  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:12.306595  636484 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:21:12.306701  636484 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:21:12.306633  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:12.307467  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:12.337295  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:12.414307  636484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:21:12.436001  636484 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-767846" to be "Ready" ...
	I1025 10:21:12.436611  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:21:12.436644  636484 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:21:12.451080  636484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:21:12.456814  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:21:12.456844  636484 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:21:12.465383  636484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:21:12.479456  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:21:12.479485  636484 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:21:12.501005  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:21:12.501032  636484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:21:12.526625  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:21:12.526672  636484 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:21:12.553034  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:21:12.553076  636484 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:21:12.573193  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:21:12.573227  636484 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:21:12.590613  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:21:12.590687  636484 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:21:12.606035  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:21:12.606071  636484 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:21:12.624851  636484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
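Every addon follows the staging pattern visible above: manifests are scp'd into /etc/kubernetes/addons on the node, then applied with the version-pinned kubectl under /var/lib/minikube/binaries against the node-local kubeconfig, so the apply works even before the host's kubeconfig is usable. Condensed to a single manifest:

    # inside the node: apply a staged addon with the pinned kubectl
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.34.1/kubectl apply \
        -f /etc/kubernetes/addons/storage-provisioner.yaml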
	I1025 10:21:13.931289  636484 node_ready.go:49] node "default-k8s-diff-port-767846" is "Ready"
	I1025 10:21:13.931333  636484 node_ready.go:38] duration metric: took 1.495294194s for node "default-k8s-diff-port-767846" to be "Ready" ...
	I1025 10:21:13.931355  636484 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:21:13.931415  636484 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:21:10.403779  638584 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 10:21:10.404001  638584 start.go:159] libmachine.API.Create for "embed-certs-683681" (driver="docker")
	I1025 10:21:10.404030  638584 client.go:168] LocalClient.Create starting
	I1025 10:21:10.404114  638584 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem
	I1025 10:21:10.404167  638584 main.go:141] libmachine: Decoding PEM data...
	I1025 10:21:10.404189  638584 main.go:141] libmachine: Parsing certificate...
	I1025 10:21:10.404267  638584 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem
	I1025 10:21:10.404309  638584 main.go:141] libmachine: Decoding PEM data...
	I1025 10:21:10.404335  638584 main.go:141] libmachine: Parsing certificate...
	I1025 10:21:10.404773  638584 cli_runner.go:164] Run: docker network inspect embed-certs-683681 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 10:21:10.426055  638584 cli_runner.go:211] docker network inspect embed-certs-683681 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 10:21:10.426150  638584 network_create.go:284] running [docker network inspect embed-certs-683681] to gather additional debugging logs...
	I1025 10:21:10.426175  638584 cli_runner.go:164] Run: docker network inspect embed-certs-683681
	W1025 10:21:10.450027  638584 cli_runner.go:211] docker network inspect embed-certs-683681 returned with exit code 1
	I1025 10:21:10.450066  638584 network_create.go:287] error running [docker network inspect embed-certs-683681]: docker network inspect embed-certs-683681: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-683681 not found
	I1025 10:21:10.450079  638584 network_create.go:289] output of [docker network inspect embed-certs-683681]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-683681 not found
	
	** /stderr **
	I1025 10:21:10.450215  638584 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:21:10.472971  638584 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b7c770f4d6bb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:31:17:4a:ca:3a} reservation:<nil>}
	I1025 10:21:10.473601  638584 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5189eca196b1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:42:d7:a0:fe:65} reservation:<nil>}
	I1025 10:21:10.474232  638584 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a58b5f36975c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1e:4d:ae:71:f0:49} reservation:<nil>}
	I1025 10:21:10.474754  638584 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c8aca1f62a35 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ce:65:a5:98:3f:04} reservation:<nil>}
	I1025 10:21:10.475283  638584 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-cc93092e09ae IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:c2:73:0a:fa:f6:13} reservation:<nil>}
	I1025 10:21:10.475999  638584 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a03c50}
	I1025 10:21:10.476026  638584 network_create.go:124] attempt to create docker network embed-certs-683681 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1025 10:21:10.476083  638584 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-683681 embed-certs-683681
	I1025 10:21:10.551427  638584 network_create.go:108] docker network embed-certs-683681 192.168.94.0/24 created
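Subnet selection above walks the 192.168.x.0/24 candidates in steps of nine (49, 58, 67, 76, 85, 94), skipping any that already back a bridge interface, and the first free one becomes the profile network. The creation step it ends with, reproduced standalone with the values from this run:

    docker network create --driver=bridge \
        --subnet=192.168.94.0/24 --gateway=192.168.94.1 \
        -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
        --label=created_by.minikube.sigs.k8s.io=true \
        --label=name.minikube.sigs.k8s.io=embed-certs-683681 \
        embed-certs-683681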
	I1025 10:21:10.551459  638584 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-683681" container
	I1025 10:21:10.551518  638584 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 10:21:10.575731  638584 cli_runner.go:164] Run: docker volume create embed-certs-683681 --label name.minikube.sigs.k8s.io=embed-certs-683681 --label created_by.minikube.sigs.k8s.io=true
	I1025 10:21:10.596450  638584 oci.go:103] Successfully created a docker volume embed-certs-683681
	I1025 10:21:10.596543  638584 cli_runner.go:164] Run: docker run --rm --name embed-certs-683681-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-683681 --entrypoint /usr/bin/test -v embed-certs-683681:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 10:21:11.043993  638584 oci.go:107] Successfully prepared a docker volume embed-certs-683681
	I1025 10:21:11.044039  638584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:11.044062  638584 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 10:21:11.044129  638584 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-683681:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
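Rather than pulling images inside the node, the preload tarball is unpacked directly into the named volume by a throwaway container whose entrypoint is tar; the -I lz4 flag filters the archive through lz4 on the way in. Stripped to its shape (PRELOAD and KICBASE stand in for the long paths above):

    docker run --rm --entrypoint /usr/bin/tar \
        -v "$PRELOAD":/preloaded.tar:ro \
        -v embed-certs-683681:/extractDir \
        "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir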
	W1025 10:21:13.772552  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	I1025 10:21:14.336599  624632 pod_ready.go:94] pod "coredns-5dd5756b68-k5644" is "Ready"
	I1025 10:21:14.336630  624632 pod_ready.go:86] duration metric: took 39.577109588s for pod "coredns-5dd5756b68-k5644" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.340650  624632 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.346235  624632 pod_ready.go:94] pod "etcd-old-k8s-version-714798" is "Ready"
	I1025 10:21:14.346269  624632 pod_ready.go:86] duration metric: took 5.588309ms for pod "etcd-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.349654  624632 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.355198  624632 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-714798" is "Ready"
	I1025 10:21:14.355230  624632 pod_ready.go:86] duration metric: took 5.550064ms for pod "kube-apiserver-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.359203  624632 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.515864  624632 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-714798" is "Ready"
	I1025 10:21:14.515908  624632 pod_ready.go:86] duration metric: took 156.674255ms for pod "kube-controller-manager-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.679941  624632 pod_ready.go:83] waiting for pod "kube-proxy-kqg7q" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.064359  624632 pod_ready.go:94] pod "kube-proxy-kqg7q" is "Ready"
	I1025 10:21:15.064395  624632 pod_ready.go:86] duration metric: took 384.425103ms for pod "kube-proxy-kqg7q" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.264420  624632 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.664469  624632 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-714798" is "Ready"
	I1025 10:21:15.664501  624632 pod_ready.go:86] duration metric: took 400.048856ms for pod "kube-scheduler-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.664517  624632 pod_ready.go:40] duration metric: took 40.910543454s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:15.713277  624632 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1025 10:21:15.739862  624632 out.go:203] 
	W1025 10:21:15.783078  624632 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1025 10:21:15.791059  624632 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1025 10:21:15.796132  624632 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-714798" cluster and "default" namespace by default
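The skew warning is accurate: kubectl is only supported within one minor version of the API server, and 1.34 against 1.28 is six minors apart. The suggested wrapper sidesteps that by fetching a kubectl that matches the cluster:

    # runs a cluster-matched kubectl (downloaded on first use)
    minikube -p old-k8s-version-714798 kubectl -- get pods -A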
	I1025 10:21:15.245915  636484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.794706474s)
	I1025 10:21:15.246013  636484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.780553475s)
	I1025 10:21:16.201960  636484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.577043142s)
	I1025 10:21:16.202175  636484 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.270743207s)
	I1025 10:21:16.202205  636484 api_server.go:72] duration metric: took 3.975127965s to wait for apiserver process to appear ...
	I1025 10:21:16.202212  636484 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:21:16.202233  636484 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1025 10:21:16.203931  636484 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-767846 addons enable metrics-server
	
	I1025 10:21:16.206179  636484 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1025 10:21:14.831620  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:16.832274  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	I1025 10:21:16.207469  636484 addons.go:514] duration metric: took 3.980316596s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1025 10:21:16.208161  636484 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:21:16.208186  636484 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
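The 500 is the verbose /healthz view: every registered check reports [+] or [-], and only rbac/bootstrap-roles is still failing, a normal transient while the apiserver seeds its default RBAC objects after a restart. minikube simply re-polls until the endpoint flips to 200, which happens on the very next probe below. The same view by hand (-k because the cluster CA is not in the host trust store):

    curl -k "https://192.168.103.2:8444/healthz?verbose"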
	I1025 10:21:16.702507  636484 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1025 10:21:16.707281  636484 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1025 10:21:16.708497  636484 api_server.go:141] control plane version: v1.34.1
	I1025 10:21:16.708529  636484 api_server.go:131] duration metric: took 506.309184ms to wait for apiserver health ...
	I1025 10:21:16.708542  636484 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:21:16.712747  636484 system_pods.go:59] 8 kube-system pods found
	I1025 10:21:16.712806  636484 system_pods.go:61] "coredns-66bc5c9577-rznxv" [d7eae20c-8d39-4486-ab11-13675911180f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:16.712819  636484 system_pods.go:61] "etcd-default-k8s-diff-port-767846" [7612c238-ca0d-458d-a901-e96167590fc2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:21:16.712835  636484 system_pods.go:61] "kindnet-vcqs2" [e41fd0fd-97c4-44ef-a645-cf0136340098] Running
	I1025 10:21:16.712845  636484 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-767846" [6eaa12a1-d4b6-4e96-81ed-18662e83034c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:21:16.712859  636484 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-767846" [7ffb55c8-db2f-4807-ace7-044a0c281f62] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:21:16.712874  636484 system_pods.go:61] "kube-proxy-cvm5c" [42278e98-5278-4efa-b484-ec73c16fc851] Running
	I1025 10:21:16.712885  636484 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-767846" [934d0b38-85ad-4c54-9e94-970177aa8cf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:21:16.712924  636484 system_pods.go:61] "storage-provisioner" [06a917da-eaa2-4b50-8c56-31a0ca7d14e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:16.712936  636484 system_pods.go:74] duration metric: took 4.383599ms to wait for pod list to return data ...
	I1025 10:21:16.712948  636484 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:21:16.715673  636484 default_sa.go:45] found service account: "default"
	I1025 10:21:16.715694  636484 default_sa.go:55] duration metric: took 2.737037ms for default service account to be created ...
	I1025 10:21:16.715704  636484 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:21:16.718943  636484 system_pods.go:86] 8 kube-system pods found
	I1025 10:21:16.718978  636484 system_pods.go:89] "coredns-66bc5c9577-rznxv" [d7eae20c-8d39-4486-ab11-13675911180f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:16.718990  636484 system_pods.go:89] "etcd-default-k8s-diff-port-767846" [7612c238-ca0d-458d-a901-e96167590fc2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:21:16.718997  636484 system_pods.go:89] "kindnet-vcqs2" [e41fd0fd-97c4-44ef-a645-cf0136340098] Running
	I1025 10:21:16.719005  636484 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-767846" [6eaa12a1-d4b6-4e96-81ed-18662e83034c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:21:16.719014  636484 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-767846" [7ffb55c8-db2f-4807-ace7-044a0c281f62] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:21:16.719034  636484 system_pods.go:89] "kube-proxy-cvm5c" [42278e98-5278-4efa-b484-ec73c16fc851] Running
	I1025 10:21:16.719042  636484 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-767846" [934d0b38-85ad-4c54-9e94-970177aa8cf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:21:16.719049  636484 system_pods.go:89] "storage-provisioner" [06a917da-eaa2-4b50-8c56-31a0ca7d14e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:16.719059  636484 system_pods.go:126] duration metric: took 3.347724ms to wait for k8s-apps to be running ...
	I1025 10:21:16.719070  636484 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:21:16.719120  636484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:21:16.733907  636484 system_svc.go:56] duration metric: took 14.825705ms WaitForService to wait for kubelet
	I1025 10:21:16.733943  636484 kubeadm.go:586] duration metric: took 4.506864504s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:21:16.733968  636484 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:21:16.737241  636484 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 10:21:16.737269  636484 node_conditions.go:123] node cpu capacity is 8
	I1025 10:21:16.737284  636484 node_conditions.go:105] duration metric: took 3.310515ms to run NodePressure ...
	I1025 10:21:16.737296  636484 start.go:241] waiting for startup goroutines ...
	I1025 10:21:16.737306  636484 start.go:246] waiting for cluster config update ...
	I1025 10:21:16.737329  636484 start.go:255] writing updated cluster config ...
	I1025 10:21:16.737611  636484 ssh_runner.go:195] Run: rm -f paused
	I1025 10:21:16.742069  636484 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:16.748801  636484 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rznxv" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:21:18.754620  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:16.111649  638584 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-683681:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.067461823s)
	I1025 10:21:16.111690  638584 kic.go:203] duration metric: took 5.067622848s to extract preloaded images to volume ...
	W1025 10:21:16.111819  638584 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 10:21:16.111866  638584 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 10:21:16.111917  638584 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 10:21:16.213690  638584 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-683681 --name embed-certs-683681 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-683681 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-683681 --network embed-certs-683681 --ip 192.168.94.2 --volume embed-certs-683681:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 10:21:16.572477  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Running}}
	I1025 10:21:16.594243  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:16.615558  638584 cli_runner.go:164] Run: docker exec embed-certs-683681 stat /var/lib/dpkg/alternatives/iptables
	I1025 10:21:16.666536  638584 oci.go:144] the created container "embed-certs-683681" has a running status.
	I1025 10:21:16.666576  638584 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa...
	I1025 10:21:16.809984  638584 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 10:21:16.847757  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:16.871585  638584 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 10:21:16.871610  638584 kic_runner.go:114] Args: [docker exec --privileged embed-certs-683681 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 10:21:16.923128  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:16.943365  638584 machine.go:93] provisionDockerMachine start ...
	I1025 10:21:16.943479  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:16.966341  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:16.966647  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:16.966668  638584 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:21:16.967537  638584 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56448->127.0.0.1:33128: read: connection reset by peer
	I1025 10:21:20.116967  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-683681
	
	I1025 10:21:20.117014  638584 ubuntu.go:182] provisioning hostname "embed-certs-683681"
	I1025 10:21:20.117084  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:20.137778  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:20.138008  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:20.138021  638584 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-683681 && echo "embed-certs-683681" | sudo tee /etc/hostname
	W1025 10:21:19.333601  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:21.831601  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:20.755645  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:22.755896  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:20.296939  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-683681
	
	I1025 10:21:20.297025  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:20.319104  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:20.319456  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:20.319479  638584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-683681' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-683681/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-683681' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:21:20.480669  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:21:20.480704  638584 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-321838/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-321838/.minikube}
	I1025 10:21:20.480727  638584 ubuntu.go:190] setting up certificates
	I1025 10:21:20.480741  638584 provision.go:84] configureAuth start
	I1025 10:21:20.480822  638584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-683681
	I1025 10:21:20.505092  638584 provision.go:143] copyHostCerts
	I1025 10:21:20.505168  638584 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem, removing ...
	I1025 10:21:20.505184  638584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem
	I1025 10:21:20.505274  638584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem (1679 bytes)
	I1025 10:21:20.505416  638584 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem, removing ...
	I1025 10:21:20.505430  638584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem
	I1025 10:21:20.505476  638584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem (1078 bytes)
	I1025 10:21:20.505561  638584 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem, removing ...
	I1025 10:21:20.505572  638584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem
	I1025 10:21:20.505630  638584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem (1123 bytes)
	I1025 10:21:20.505706  638584 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem org=jenkins.embed-certs-683681 san=[127.0.0.1 192.168.94.2 embed-certs-683681 localhost minikube]
	I1025 10:21:20.998585  638584 provision.go:177] copyRemoteCerts
	I1025 10:21:20.998661  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:21:20.998717  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.022129  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.137465  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 10:21:21.166388  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:21:21.193168  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:21:21.218286  638584 provision.go:87] duration metric: took 737.524136ms to configureAuth
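configureAuth generates a server certificate whose SANs cover every name the machine may be dialed by (127.0.0.1, the container IP, the profile name, localhost, minikube) and ships the key pair plus CA to /etc/docker on the node. To inspect the SAN list that landed there (needs OpenSSL 1.1.1+ for -ext):

    sudo openssl x509 -noout -ext subjectAltName -in /etc/docker/server.pem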
	I1025 10:21:21.218330  638584 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:21:21.218553  638584 config.go:182] Loaded profile config "embed-certs-683681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:21.218676  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.245915  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:21.246236  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:21.246262  638584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:21:21.569413  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:21:21.569443  638584 machine.go:96] duration metric: took 4.626049853s to provisionDockerMachine
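The sysconfig drop-in written just above marks the whole service CIDR (10.96.0.0/12) as an insecure registry for CRI-O, so in-cluster registries exposed on a ClusterIP can be pulled from without TLS; the systemctl restart in the same command makes the unit pick the variable up. Verifying after the fact:

    cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '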
	I1025 10:21:21.569456  638584 client.go:171] duration metric: took 11.165417694s to LocalClient.Create
	I1025 10:21:21.569475  638584 start.go:167] duration metric: took 11.165474816s to libmachine.API.Create "embed-certs-683681"
	I1025 10:21:21.569486  638584 start.go:293] postStartSetup for "embed-certs-683681" (driver="docker")
	I1025 10:21:21.569498  638584 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:21:21.569575  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:21:21.569622  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.594722  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.713328  638584 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:21:21.718538  638584 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:21:21.718572  638584 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:21:21.718589  638584 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/addons for local assets ...
	I1025 10:21:21.718659  638584 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/files for local assets ...
	I1025 10:21:21.718787  638584 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem -> 3254552.pem in /etc/ssl/certs
	I1025 10:21:21.718927  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:21:21.729097  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:21.759300  638584 start.go:296] duration metric: took 189.796063ms for postStartSetup
	I1025 10:21:21.759764  638584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-683681
	I1025 10:21:21.783751  638584 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/config.json ...
	I1025 10:21:21.784070  638584 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:21:21.784113  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.807921  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.920186  638584 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:21:21.927662  638584 start.go:128] duration metric: took 11.525830646s to createHost
	I1025 10:21:21.927699  638584 start.go:83] releasing machines lock for "embed-certs-683681", held for 11.526002458s
	I1025 10:21:21.927785  638584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-683681
	I1025 10:21:21.954049  638584 ssh_runner.go:195] Run: cat /version.json
	I1025 10:21:21.954096  638584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:21:21.954115  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.954188  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.978409  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.979872  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:22.092988  638584 ssh_runner.go:195] Run: systemctl --version
	I1025 10:21:22.175966  638584 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:21:22.229838  638584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:21:22.236975  638584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:21:22.237063  638584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:21:22.280942  638584 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 10:21:22.280974  638584 start.go:495] detecting cgroup driver to use...
	I1025 10:21:22.281010  638584 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 10:21:22.281075  638584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:21:22.306839  638584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:21:22.324489  638584 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:21:22.324560  638584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:21:22.350902  638584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:21:22.380086  638584 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:21:22.506896  638584 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:21:22.639498  638584 docker.go:234] disabling docker service ...
	I1025 10:21:22.639578  638584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:21:22.669198  638584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:21:22.689583  638584 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:21:22.814437  638584 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:21:22.917355  638584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:21:22.933471  638584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:21:22.951220  638584 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:21:22.951289  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.964021  638584 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 10:21:22.964092  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.974888  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.985640  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.996280  638584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:21:23.008692  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:23.019742  638584 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:23.036857  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:23.048489  638584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:21:23.060801  638584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:21:23.072496  638584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:23.170641  638584 ssh_runner.go:195] Run: sudo systemctl restart crio
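	The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10.1 and cgroup_manager is forced to "systemd" to match the cgroup driver detected on the host. A hedged Go sketch of the same two replacements done in-process instead of via sed (paths and values from the log; not minikube's implementation):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Same patterns as the log's `sed -i 's|^.*pause_image = .*$|...|'` edits.
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(`cgroup_manager = "systemd"`))
		if err := os.WriteFile(conf, out, 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// crio must be restarted afterwards, as the log does with
		// `systemctl daemon-reload` followed by `systemctl restart crio`.
	}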
	I1025 10:21:24.036513  638584 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:21:24.036615  638584 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:21:24.042080  638584 start.go:563] Will wait 60s for crictl version
	I1025 10:21:24.042156  638584 ssh_runner.go:195] Run: which crictl
	I1025 10:21:24.047422  638584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:21:24.082362  638584 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:21:24.082466  638584 ssh_runner.go:195] Run: crio --version
	I1025 10:21:24.126861  638584 ssh_runner.go:195] Run: crio --version
	I1025 10:21:24.175837  638584 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:21:24.178134  638584 cli_runner.go:164] Run: docker network inspect embed-certs-683681 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:21:24.201413  638584 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1025 10:21:24.207278  638584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:21:24.223512  638584 kubeadm.go:883] updating cluster {Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:21:24.223683  638584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:24.223762  638584 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:24.272966  638584 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:24.272993  638584 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:21:24.273051  638584 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:24.308934  638584 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:24.308965  638584 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:21:24.308975  638584 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1025 10:21:24.309097  638584 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-683681 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
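	The unit fragment above is the kubelet systemd drop-in minikube renders before the 368-byte scp of 10-kubeadm.conf a few lines below. A sketch of rendering an equivalent drop-in with text/template (the template field names here are illustrative, not minikube's actual types; the values are taken from the log):

	package main

	import (
		"os"
		"text/template"
	)

	const unit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		// Values as seen in the ExecStart line above.
		_ = t.Execute(os.Stdout, map[string]string{
			"KubeletPath": "/var/lib/minikube/binaries/v1.34.1/kubelet",
			"NodeName":    "embed-certs-683681",
			"NodeIP":      "192.168.94.2",
		})
	}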
	I1025 10:21:24.309184  638584 ssh_runner.go:195] Run: crio config
	I1025 10:21:24.382243  638584 cni.go:84] Creating CNI manager for ""
	I1025 10:21:24.382273  638584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:21:24.382297  638584 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:21:24.382337  638584 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-683681 NodeName:embed-certs-683681 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:21:24.382524  638584 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-683681"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
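	The generated kubeadm.yaml above holds four ---separated documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small Go sketch that walks the multi-document stream and picks out the ClusterConfiguration fields, assuming the gopkg.in/yaml.v3 module and the /var/tmp/minikube/kubeadm.yaml path from the log:

	package main

	import (
		"bytes"
		"fmt"
		"os"

		"gopkg.in/yaml.v3" // assumed dependency; not part of the stdlib
	)

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // path from the log
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		dec := yaml.NewDecoder(bytes.NewReader(data))
		for {
			var doc map[string]any
			if err := dec.Decode(&doc); err != nil {
				break // io.EOF once all four documents are consumed
			}
			// Each document carries its own kind; pick the cluster-wide one.
			if doc["kind"] == "ClusterConfiguration" {
				fmt.Println("kubernetesVersion:", doc["kubernetesVersion"])
				fmt.Println("controlPlaneEndpoint:", doc["controlPlaneEndpoint"])
			}
		}
	}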
	
	I1025 10:21:24.382607  638584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:21:24.394268  638584 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:21:24.394387  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:21:24.406618  638584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1025 10:21:24.425969  638584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:21:24.449251  638584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1025 10:21:24.469582  638584 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:21:24.474973  638584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:21:24.490157  638584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:24.584608  638584 ssh_runner.go:195] Run: sudo systemctl start kubelet
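	The /etc/hosts update a few lines up is idempotent: the grep -v strips any stale entry for the name before a fresh one is appended, so repeated starts never accumulate duplicates. The same pattern as a Go sketch (written against a local test file so it runs unprivileged; not minikube's code):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHost drops any existing line ending in "\t<name>" (the same
	// anchor the log's grep -v $'\t...$' uses) and appends a fresh entry.
	func upsertHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line != "" && !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := upsertHost("hosts.test", "192.168.94.2", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}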
	I1025 10:21:24.614181  638584 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681 for IP: 192.168.94.2
	I1025 10:21:24.614210  638584 certs.go:195] generating shared ca certs ...
	I1025 10:21:24.614233  638584 certs.go:227] acquiring lock for ca certs: {Name:mke559c04eea9c265cb2a7beab0da125bee52db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.614424  638584 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key
	I1025 10:21:24.614484  638584 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key
	I1025 10:21:24.614496  638584 certs.go:257] generating profile certs ...
	I1025 10:21:24.614561  638584 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.key
	I1025 10:21:24.614588  638584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.crt with IP's: []
	I1025 10:21:24.860136  638584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.crt ...
	I1025 10:21:24.860185  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.crt: {Name:mk13866e786fa05bf2537b78a891e332bde8c0bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.860411  638584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.key ...
	I1025 10:21:24.860433  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.key: {Name:mk1337a45bd58216e46a47cf6f99440d10fa8b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.860559  638584 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81
	I1025 10:21:24.860582  638584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1025 10:21:24.949254  638584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81 ...
	I1025 10:21:24.949286  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81: {Name:mkc51a7d58b8866a38120d27081d78fd5d68e786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.949518  638584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81 ...
	I1025 10:21:24.949547  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81: {Name:mk94d386c4ce3ce7255b450634f934fa53890845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.949697  638584 certs.go:382] copying /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81 -> /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt
	I1025 10:21:24.949820  638584 certs.go:386] copying /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81 -> /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key
	I1025 10:21:24.949908  638584 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key
	I1025 10:21:24.949937  638584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt with IP's: []
	W1025 10:21:24.331982  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:26.831359  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:25.254917  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:27.754831  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:25.383221  638584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt ...
	I1025 10:21:25.383272  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt: {Name:mk46cb1967cb21d5d9aafce0c0335add4612cf00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:25.383535  638584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key ...
	I1025 10:21:25.383560  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key: {Name:mkda2e4f8c6847061b7c83d0748f50b193d241a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
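	Each profile cert above is generated in-process and written under a file lock. A self-contained crypto/x509 sketch producing a certificate with the same IP SANs as the apiserver cert (self-signed here for brevity; minikube signs with its minikubeCA instead):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// Same IP SANs the log generates for the apiserver cert.
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}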
	I1025 10:21:25.383814  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem (1338 bytes)
	W1025 10:21:25.383870  638584 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455_empty.pem, impossibly tiny 0 bytes
	I1025 10:21:25.383887  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:21:25.383917  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:21:25.383941  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:21:25.383962  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem (1679 bytes)
	I1025 10:21:25.384004  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:25.384676  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:21:25.406810  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 10:21:25.429770  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:21:25.451189  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:21:25.475734  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1025 10:21:25.500538  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 10:21:25.522356  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:21:25.545290  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:21:25.567130  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:21:25.591445  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem --> /usr/share/ca-certificates/325455.pem (1338 bytes)
	I1025 10:21:25.616100  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /usr/share/ca-certificates/3254552.pem (1708 bytes)
	I1025 10:21:25.635723  638584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:21:25.650419  638584 ssh_runner.go:195] Run: openssl version
	I1025 10:21:25.657438  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:21:25.667296  638584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:25.671566  638584 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:32 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:25.671639  638584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:25.708223  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:21:25.718734  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/325455.pem && ln -fs /usr/share/ca-certificates/325455.pem /etc/ssl/certs/325455.pem"
	I1025 10:21:25.728930  638584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/325455.pem
	I1025 10:21:25.733604  638584 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:38 /usr/share/ca-certificates/325455.pem
	I1025 10:21:25.733672  638584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/325455.pem
	I1025 10:21:25.770496  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/325455.pem /etc/ssl/certs/51391683.0"
	I1025 10:21:25.780237  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3254552.pem && ln -fs /usr/share/ca-certificates/3254552.pem /etc/ssl/certs/3254552.pem"
	I1025 10:21:25.790312  638584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3254552.pem
	I1025 10:21:25.794835  638584 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:38 /usr/share/ca-certificates/3254552.pem
	I1025 10:21:25.794898  638584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3254552.pem
	I1025 10:21:25.832583  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3254552.pem /etc/ssl/certs/3ec20f2e.0"
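	The test -L / ln -fs pairs above implement OpenSSL's hashed-directory convention: trust-store lookups resolve <subject-hash>.0 symlinks under /etc/ssl/certs, so each PEM dropped into /usr/share/ca-certificates needs a matching link. A sketch of that step, shelling out to the same openssl x509 -hash invocation the log runs:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkCert creates the <subject-hash>.0 symlink OpenSSL expects.
	func linkCert(pemPath, certDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := certDir + "/" + hash + ".0"
		_ = os.Remove(link) // mirror `ln -fs`: replace any stale link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}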
	I1025 10:21:25.842614  638584 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:21:25.846872  638584 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 10:21:25.846930  638584 kubeadm.go:400] StartCluster: {Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:21:25.847005  638584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:21:25.847068  638584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:21:25.875826  638584 cri.go:89] found id: ""
	I1025 10:21:25.875903  638584 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:21:25.885163  638584 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:21:25.894136  638584 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 10:21:25.894192  638584 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:21:25.903706  638584 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 10:21:25.903732  638584 kubeadm.go:157] found existing configuration files:
	
	I1025 10:21:25.903784  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 10:21:25.913301  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 10:21:25.913384  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:21:25.923343  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 10:21:25.932490  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 10:21:25.932550  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:21:25.941477  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 10:21:25.950962  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 10:21:25.951028  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:21:25.959533  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 10:21:25.968524  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 10:21:25.968595  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 10:21:25.977380  638584 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 10:21:26.045566  638584 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 10:21:26.120440  638584 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
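	Both warnings are expected under the docker driver: the log already notes at kubeadm.go:214 that SystemVerification is ignored, and minikube manages the kubelet unit itself rather than enabling it. If one wanted to re-run just the preflight checks against the same config, a sketch (kubeadm init phase preflight is a real subcommand; carrying over --ignore-preflight-errors from the full init above is an assumption):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// Config path as written by the log; binary assumed on PATH.
		cmd := exec.Command("kubeadm", "init", "phase", "preflight",
			"--config", "/var/tmp/minikube/kubeadm.yaml",
			"--ignore-preflight-errors=SystemVerification")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}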
	
	
	==> CRI-O <==
	Oct 25 10:20:50 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:50.52557564Z" level=info msg="Starting container: 3095de72ef005f8da24fe13f5c258a5435b6eb90510b5c538ac55193462ab85a" id=05dc4ad7-7540-44ab-b9da-773fa1bcca4f name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:20:50 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:50.528381763Z" level=info msg="Started container" PID=1667 containerID=3095de72ef005f8da24fe13f5c258a5435b6eb90510b5c538ac55193462ab85a description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r/dashboard-metrics-scraper id=05dc4ad7-7540-44ab-b9da-773fa1bcca4f name=/runtime.v1.RuntimeService/StartContainer sandboxID=c02df6df091e149755ea16998551388180b1ae68589d0a50e2ed2f45de2124e7
	Oct 25 10:20:51 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:51.491476818Z" level=info msg="Removing container: 636302bfd0254fc20079b8d9fcba81822f3c418244e5d7178b98cd710a0bc827" id=f8216eae-b8d3-4f35-96df-8182f66d2f23 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:20:51 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:51.504038139Z" level=info msg="Removed container 636302bfd0254fc20079b8d9fcba81822f3c418244e5d7178b98cd710a0bc827: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r/dashboard-metrics-scraper" id=f8216eae-b8d3-4f35-96df-8182f66d2f23 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:20:53 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:53.488466798Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=726b2567-8cb8-4c14-856a-246195d3ce4a name=/runtime.v1.ImageService/PullImage
	Oct 25 10:20:53 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:53.490553401Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=5c87eac5-3d86-4dca-9acc-7b617814b016 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:20:53 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:53.493144135Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mshs4/kubernetes-dashboard" id=1f36e9c2-564a-4575-be0b-b6377011d919 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:20:53 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:53.494261163Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:53 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:53.502573691Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:53 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:53.502948542Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3fcc4b58b1a8e5da26fb2264a5f7e6c09b6ed60883f1b43b667fa39fec9755e9/merged/etc/group: no such file or directory"
	Oct 25 10:20:53 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:53.503532006Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:53 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:53.533709935Z" level=info msg="Created container 023f9eec31a026b72d57a05e021ddae34e171cf9477f9c45ccc83ccc83724ad3: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mshs4/kubernetes-dashboard" id=1f36e9c2-564a-4575-be0b-b6377011d919 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:20:53 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:53.53458072Z" level=info msg="Starting container: 023f9eec31a026b72d57a05e021ddae34e171cf9477f9c45ccc83ccc83724ad3" id=714ee3ee-9044-42e5-9c65-bc47d3a73d26 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:20:53 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:53.537180123Z" level=info msg="Started container" PID=1716 containerID=023f9eec31a026b72d57a05e021ddae34e171cf9477f9c45ccc83ccc83724ad3 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mshs4/kubernetes-dashboard id=714ee3ee-9044-42e5-9c65-bc47d3a73d26 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9caf4a77f26bc21c5423beeb1b922fc9163c0a010fd8ac7f1aa0c0dd55e215f6
	Oct 25 10:21:10 old-k8s-version-714798 crio[559]: time="2025-10-25T10:21:10.380626116Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=204688f5-7ffe-46d6-b56e-fb5c51f84669 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:21:10 old-k8s-version-714798 crio[559]: time="2025-10-25T10:21:10.383625717Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b201715e-86d9-40a4-9b59-7c61ffdf76a3 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:21:10 old-k8s-version-714798 crio[559]: time="2025-10-25T10:21:10.385495169Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r/dashboard-metrics-scraper" id=f523e15f-e24c-4d8d-84de-96f831063eec name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:21:10 old-k8s-version-714798 crio[559]: time="2025-10-25T10:21:10.385661121Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:10 old-k8s-version-714798 crio[559]: time="2025-10-25T10:21:10.395894289Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:10 old-k8s-version-714798 crio[559]: time="2025-10-25T10:21:10.39673567Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:10 old-k8s-version-714798 crio[559]: time="2025-10-25T10:21:10.435573403Z" level=info msg="Created container 2867ca1d41946eeecbfc494d499686f5ecf5f15b7090ccc842b585183da21368: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r/dashboard-metrics-scraper" id=f523e15f-e24c-4d8d-84de-96f831063eec name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:21:10 old-k8s-version-714798 crio[559]: time="2025-10-25T10:21:10.436364114Z" level=info msg="Starting container: 2867ca1d41946eeecbfc494d499686f5ecf5f15b7090ccc842b585183da21368" id=0be372d0-5f5d-4836-864a-3ad173130492 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:21:10 old-k8s-version-714798 crio[559]: time="2025-10-25T10:21:10.438388091Z" level=info msg="Started container" PID=1735 containerID=2867ca1d41946eeecbfc494d499686f5ecf5f15b7090ccc842b585183da21368 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r/dashboard-metrics-scraper id=0be372d0-5f5d-4836-864a-3ad173130492 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c02df6df091e149755ea16998551388180b1ae68589d0a50e2ed2f45de2124e7
	Oct 25 10:21:10 old-k8s-version-714798 crio[559]: time="2025-10-25T10:21:10.559779502Z" level=info msg="Removing container: 3095de72ef005f8da24fe13f5c258a5435b6eb90510b5c538ac55193462ab85a" id=89c7faf2-032a-4d03-962c-a6c0cbb55db8 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:21:10 old-k8s-version-714798 crio[559]: time="2025-10-25T10:21:10.57197179Z" level=info msg="Removed container 3095de72ef005f8da24fe13f5c258a5435b6eb90510b5c538ac55193462ab85a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r/dashboard-metrics-scraper" id=89c7faf2-032a-4d03-962c-a6c0cbb55db8 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	2867ca1d41946       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago       Exited              dashboard-metrics-scraper   2                   c02df6df091e1       dashboard-metrics-scraper-5f989dc9cf-nbn6r       kubernetes-dashboard
	023f9eec31a02       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   38 seconds ago       Running             kubernetes-dashboard        0                   9caf4a77f26bc       kubernetes-dashboard-8694d4445c-mshs4            kubernetes-dashboard
	83298f2967781       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Running             storage-provisioner         1                   4607ea6244f35       storage-provisioner                              kube-system
	553718397c387       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           57 seconds ago       Running             coredns                     0                   37dd48d1ba5b4       coredns-5dd5756b68-k5644                         kube-system
	93c3d9ff32729       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   fa285afeb70aa       busybox                                          default
	f986363d36450       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   4607ea6244f35       storage-provisioner                              kube-system
	f4a2f7f040204       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           57 seconds ago       Running             kindnet-cni                 0                   24837f8a957eb       kindnet-g9r7c                                    kube-system
	02ebd7cadca0e       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           57 seconds ago       Running             kube-proxy                  0                   bfb797eeb5c7f       kube-proxy-kqg7q                                 kube-system
	5538d92e1ad00       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           About a minute ago   Running             kube-apiserver              0                   ed87d2b77bc52       kube-apiserver-old-k8s-version-714798            kube-system
	bbd6a05e15124       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           About a minute ago   Running             kube-scheduler              0                   9659e8b2febb4       kube-scheduler-old-k8s-version-714798            kube-system
	ce12ceda5c77b       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           About a minute ago   Running             kube-controller-manager     0                   5196626a8cf61       kube-controller-manager-old-k8s-version-714798   kube-system
	b25eb7cda6de2       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           About a minute ago   Running             etcd                        0                   2e7c3d6d2c900       etcd-old-k8s-version-714798                      kube-system
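	The table above is container status captured at the end of the run; the Exited dashboard-metrics-scraper row at attempt 2 matches the remove/recreate cycle in the CRI-O log. A sketch of reading the same table programmatically via the --output json form the log already uses for crictl images, decoding only the fields printed here into an ad-hoc struct:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var doc struct {
			Containers []struct {
				Metadata struct {
					Name string `json:"name"`
				} `json:"metadata"`
				State string `json:"state"` // e.g. CONTAINER_RUNNING, CONTAINER_EXITED
			} `json:"containers"`
		}
		if err := json.Unmarshal(out, &doc); err != nil {
			panic(err)
		}
		for _, c := range doc.Containers {
			fmt.Println(c.Metadata.Name, c.State)
		}
	}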
	
	
	==> coredns [553718397c387da8f5f2fcd092c2a59e58c71cc63b088ea724a3169ee7c5b5bc] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55810 - 64319 "HINFO IN 848335762832656212.1076516786252787776. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.070833756s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-714798
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-714798
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=old-k8s-version-714798
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_19_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:19:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-714798
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:21:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:21:03 +0000   Sat, 25 Oct 2025 10:19:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:21:03 +0000   Sat, 25 Oct 2025 10:19:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:21:03 +0000   Sat, 25 Oct 2025 10:19:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:21:03 +0000   Sat, 25 Oct 2025 10:19:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-714798
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                ae2946a1-bd36-4e8d-a493-cdd7e65b514c
	  Boot ID:                    41de8bfa-0cc1-441f-80d4-c56fb8371229
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 coredns-5dd5756b68-k5644                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     117s
	  kube-system                 etcd-old-k8s-version-714798                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m10s
	  kube-system                 kindnet-g9r7c                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      117s
	  kube-system                 kube-apiserver-old-k8s-version-714798             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-controller-manager-old-k8s-version-714798    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-proxy-kqg7q                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-scheduler-old-k8s-version-714798             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-nbn6r        0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-mshs4             0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 116s               kube-proxy       
	  Normal  Starting                 57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m10s              kubelet          Node old-k8s-version-714798 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s              kubelet          Node old-k8s-version-714798 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s              kubelet          Node old-k8s-version-714798 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m10s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           118s               node-controller  Node old-k8s-version-714798 event: Registered Node old-k8s-version-714798 in Controller
	  Normal  NodeReady                103s               kubelet          Node old-k8s-version-714798 status is now: NodeReady
	  Normal  Starting                 61s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node old-k8s-version-714798 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node old-k8s-version-714798 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)  kubelet          Node old-k8s-version-714798 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                node-controller  Node old-k8s-version-714798 event: Registered Node old-k8s-version-714798 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[Oct25 10:18] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 3d 4d bf 49 5d 08 06
	[  +0.000365] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fe 72 b8 ab d2 81 08 06
	[ +29.291338] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 96 23 11 37 e3 00 08 06
	[  +0.000335] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[ +21.527050] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 89 98 95 1f c3 08 06
	[  +0.000689] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[Oct25 10:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[  +9.472150] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
	[  +6.585715] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ce 90 e9 36 a0 95 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[ +15.111475] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 5e 04 d2 54 0d 08 06
	[  +0.000467] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
	
	
	==> etcd [b25eb7cda6de2aff244793687094ba7b3ca70cb7a03ef1adb707e0d582e0580e] <==
	{"level":"info","ts":"2025-10-25T10:20:30.984904Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T10:20:30.984916Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T10:20:30.984964Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T10:20:30.985799Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-25T10:20:30.985958Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-25T10:20:30.985995Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-25T10:20:30.986085Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-25T10:20:30.986119Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-25T10:20:31.966645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-25T10:20:31.966713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-25T10:20:31.966747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-25T10:20:31.966766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-25T10:20:31.966771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-25T10:20:31.96678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-25T10:20:31.966788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-25T10:20:31.967693Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-714798 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-25T10:20:31.967707Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T10:20:31.967727Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T10:20:31.968003Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-25T10:20:31.968056Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-25T10:20:31.969094Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-25T10:20:31.969441Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-25T10:21:14.060499Z","caller":"traceutil/trace.go:171","msg":"trace[1284681663] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"114.245708ms","start":"2025-10-25T10:21:13.946229Z","end":"2025-10-25T10:21:14.060475Z","steps":["trace[1284681663] 'process raft request'  (duration: 75.802471ms)","trace[1284681663] 'compare'  (duration: 38.008006ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T10:21:14.328719Z","caller":"traceutil/trace.go:171","msg":"trace[256172037] transaction","detail":"{read_only:false; response_revision:626; number_of_response:1; }","duration":"113.471681ms","start":"2025-10-25T10:21:14.215198Z","end":"2025-10-25T10:21:14.328669Z","steps":["trace[256172037] 'process raft request'  (duration: 105.784211ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:21:14.329446Z","caller":"traceutil/trace.go:171","msg":"trace[991438834] transaction","detail":"{read_only:false; response_revision:627; number_of_response:1; }","duration":"111.040832ms","start":"2025-10-25T10:21:14.21839Z","end":"2025-10-25T10:21:14.329431Z","steps":["trace[991438834] 'process raft request'  (duration: 110.077934ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:21:31 up  2:04,  0 user,  load average: 7.25, 5.41, 6.07
	Linux old-k8s-version-714798 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f4a2f7f040204ba504676eed9f3884012aeaf80acbd4821516096fc8bff9e833] <==
	I1025 10:20:34.092063       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:20:34.092795       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:20:34.096587       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:20:34.096694       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:20:34.096750       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:20:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:20:34.391904       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:20:34.392043       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:20:34.392059       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:20:34.392234       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 10:20:34.774694       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:20:34.774732       1 metrics.go:72] Registering metrics
	I1025 10:20:34.774815       1 controller.go:711] "Syncing nftables rules"
	I1025 10:20:44.392468       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:20:44.392534       1 main.go:301] handling current node
	I1025 10:20:54.392443       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:20:54.392486       1 main.go:301] handling current node
	I1025 10:21:04.392189       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:21:04.392245       1 main.go:301] handling current node
	I1025 10:21:14.391866       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:21:14.391914       1 main.go:301] handling current node
	I1025 10:21:24.398432       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:21:24.398482       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5538d92e1ad00d0b895ea0869e732ceaf8db5758c6940c69bb5d41a8e0661704] <==
	I1025 10:20:33.049080       1 naming_controller.go:291] Starting NamingConditionController
	I1025 10:20:33.127448       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1025 10:20:33.148830       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1025 10:20:33.150952       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1025 10:20:33.151050       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1025 10:20:33.151096       1 shared_informer.go:318] Caches are synced for configmaps
	I1025 10:20:33.151145       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:20:33.151606       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1025 10:20:33.151618       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1025 10:20:33.151673       1 aggregator.go:166] initial CRD sync complete...
	I1025 10:20:33.151683       1 autoregister_controller.go:141] Starting autoregister controller
	I1025 10:20:33.151690       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:20:33.151698       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:20:33.213390       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:20:34.059657       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:20:34.538795       1 controller.go:624] quota admission added evaluator for: namespaces
	I1025 10:20:34.585125       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1025 10:20:34.611403       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:20:34.621739       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:20:34.630262       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1025 10:20:34.677754       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.182.56"}
	I1025 10:20:34.698095       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.186.43"}
	I1025 10:20:46.325178       1 controller.go:624] quota admission added evaluator for: endpoints
	I1025 10:20:46.523676       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1025 10:20:46.573237       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [ce12ceda5c77bef4710f4a8f8a5a88ca899e512d3d2151b06751ca05f3184af3] <==
	I1025 10:20:46.529761       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1025 10:20:46.628284       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="308.369423ms"
	I1025 10:20:46.628463       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="102.318µs"
	I1025 10:20:46.631176       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-mshs4"
	I1025 10:20:46.631263       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-nbn6r"
	I1025 10:20:46.637237       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 10:20:46.637270       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1025 10:20:46.639843       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="110.815791ms"
	I1025 10:20:46.639872       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="112.015041ms"
	I1025 10:20:46.648267       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 10:20:46.649217       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="9.300581ms"
	I1025 10:20:46.651426       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="2.145347ms"
	I1025 10:20:46.651806       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.893207ms"
	I1025 10:20:46.651910       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="68.05µs"
	I1025 10:20:46.655131       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="90.956µs"
	I1025 10:20:46.664570       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.098µs"
	I1025 10:20:50.498676       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.828µs"
	I1025 10:20:51.505940       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.719µs"
	I1025 10:20:52.554878       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="134.985µs"
	I1025 10:20:54.566074       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="43.603602ms"
	I1025 10:20:54.566199       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="75.968µs"
	I1025 10:21:10.573448       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="122.271µs"
	I1025 10:21:14.212352       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="105.049718ms"
	I1025 10:21:14.212486       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.409µs"
	I1025 10:21:16.954981       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="84.641µs"
	
	
	==> kube-proxy [02ebd7cadca0e2f2e1a8fdb2d2a4025e434b7679c4e9c3329b85521f4edff815] <==
	I1025 10:20:33.925955       1 server_others.go:69] "Using iptables proxy"
	I1025 10:20:33.954592       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1025 10:20:34.012444       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:20:34.018918       1 server_others.go:152] "Using iptables Proxier"
	I1025 10:20:34.018969       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1025 10:20:34.018980       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1025 10:20:34.019022       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 10:20:34.019681       1 server.go:846] "Version info" version="v1.28.0"
	I1025 10:20:34.019991       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:20:34.023492       1 config.go:188] "Starting service config controller"
	I1025 10:20:34.023569       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 10:20:34.023680       1 config.go:97] "Starting endpoint slice config controller"
	I1025 10:20:34.023708       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 10:20:34.024099       1 config.go:315] "Starting node config controller"
	I1025 10:20:34.024557       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 10:20:34.125374       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1025 10:20:34.125592       1 shared_informer.go:318] Caches are synced for node config
	I1025 10:20:34.125617       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [bbd6a05e151245b4f918254624d45abfaa66832cc221e776d8265d0e8fa29750] <==
	I1025 10:20:31.302881       1 serving.go:348] Generated self-signed cert in-memory
	W1025 10:20:33.096356       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 10:20:33.096397       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 10:20:33.096521       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 10:20:33.096537       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 10:20:33.119067       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1025 10:20:33.119120       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:20:33.126133       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:20:33.126302       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 10:20:33.135195       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1025 10:20:33.135563       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1025 10:20:33.227639       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 25 10:20:46 old-k8s-version-714798 kubelet[710]: I1025 10:20:46.781532     710 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stfj7\" (UniqueName: \"kubernetes.io/projected/633c266d-f837-432b-843f-b86244518663-kube-api-access-stfj7\") pod \"kubernetes-dashboard-8694d4445c-mshs4\" (UID: \"633c266d-f837-432b-843f-b86244518663\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mshs4"
	Oct 25 10:20:46 old-k8s-version-714798 kubelet[710]: I1025 10:20:46.781617     710 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/633c266d-f837-432b-843f-b86244518663-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-mshs4\" (UID: \"633c266d-f837-432b-843f-b86244518663\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mshs4"
	Oct 25 10:20:46 old-k8s-version-714798 kubelet[710]: I1025 10:20:46.781657     710 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfg5w\" (UniqueName: \"kubernetes.io/projected/f45f31cd-c886-48b9-8f84-70ac66dac634-kube-api-access-rfg5w\") pod \"dashboard-metrics-scraper-5f989dc9cf-nbn6r\" (UID: \"f45f31cd-c886-48b9-8f84-70ac66dac634\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r"
	Oct 25 10:20:46 old-k8s-version-714798 kubelet[710]: I1025 10:20:46.781784     710 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f45f31cd-c886-48b9-8f84-70ac66dac634-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-nbn6r\" (UID: \"f45f31cd-c886-48b9-8f84-70ac66dac634\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r"
	Oct 25 10:20:50 old-k8s-version-714798 kubelet[710]: I1025 10:20:50.484823     710 scope.go:117] "RemoveContainer" containerID="636302bfd0254fc20079b8d9fcba81822f3c418244e5d7178b98cd710a0bc827"
	Oct 25 10:20:51 old-k8s-version-714798 kubelet[710]: I1025 10:20:51.489568     710 scope.go:117] "RemoveContainer" containerID="636302bfd0254fc20079b8d9fcba81822f3c418244e5d7178b98cd710a0bc827"
	Oct 25 10:20:51 old-k8s-version-714798 kubelet[710]: I1025 10:20:51.489933     710 scope.go:117] "RemoveContainer" containerID="3095de72ef005f8da24fe13f5c258a5435b6eb90510b5c538ac55193462ab85a"
	Oct 25 10:20:51 old-k8s-version-714798 kubelet[710]: E1025 10:20:51.490349     710 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nbn6r_kubernetes-dashboard(f45f31cd-c886-48b9-8f84-70ac66dac634)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r" podUID="f45f31cd-c886-48b9-8f84-70ac66dac634"
	Oct 25 10:20:52 old-k8s-version-714798 kubelet[710]: I1025 10:20:52.493810     710 scope.go:117] "RemoveContainer" containerID="3095de72ef005f8da24fe13f5c258a5435b6eb90510b5c538ac55193462ab85a"
	Oct 25 10:20:52 old-k8s-version-714798 kubelet[710]: E1025 10:20:52.494212     710 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nbn6r_kubernetes-dashboard(f45f31cd-c886-48b9-8f84-70ac66dac634)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r" podUID="f45f31cd-c886-48b9-8f84-70ac66dac634"
	Oct 25 10:20:54 old-k8s-version-714798 kubelet[710]: I1025 10:20:54.522436     710 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mshs4" podStartSLOduration=2.00098375 podCreationTimestamp="2025-10-25 10:20:46 +0000 UTC" firstStartedPulling="2025-10-25 10:20:46.968470366 +0000 UTC m=+16.696572504" lastFinishedPulling="2025-10-25 10:20:53.48979132 +0000 UTC m=+23.217893460" observedRunningTime="2025-10-25 10:20:54.521700913 +0000 UTC m=+24.249803055" watchObservedRunningTime="2025-10-25 10:20:54.522304706 +0000 UTC m=+24.250406863"
	Oct 25 10:20:56 old-k8s-version-714798 kubelet[710]: I1025 10:20:56.943488     710 scope.go:117] "RemoveContainer" containerID="3095de72ef005f8da24fe13f5c258a5435b6eb90510b5c538ac55193462ab85a"
	Oct 25 10:20:56 old-k8s-version-714798 kubelet[710]: E1025 10:20:56.943883     710 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nbn6r_kubernetes-dashboard(f45f31cd-c886-48b9-8f84-70ac66dac634)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r" podUID="f45f31cd-c886-48b9-8f84-70ac66dac634"
	Oct 25 10:21:10 old-k8s-version-714798 kubelet[710]: I1025 10:21:10.377714     710 scope.go:117] "RemoveContainer" containerID="3095de72ef005f8da24fe13f5c258a5435b6eb90510b5c538ac55193462ab85a"
	Oct 25 10:21:10 old-k8s-version-714798 kubelet[710]: I1025 10:21:10.558258     710 scope.go:117] "RemoveContainer" containerID="3095de72ef005f8da24fe13f5c258a5435b6eb90510b5c538ac55193462ab85a"
	Oct 25 10:21:10 old-k8s-version-714798 kubelet[710]: I1025 10:21:10.558554     710 scope.go:117] "RemoveContainer" containerID="2867ca1d41946eeecbfc494d499686f5ecf5f15b7090ccc842b585183da21368"
	Oct 25 10:21:10 old-k8s-version-714798 kubelet[710]: E1025 10:21:10.558957     710 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nbn6r_kubernetes-dashboard(f45f31cd-c886-48b9-8f84-70ac66dac634)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r" podUID="f45f31cd-c886-48b9-8f84-70ac66dac634"
	Oct 25 10:21:16 old-k8s-version-714798 kubelet[710]: I1025 10:21:16.942492     710 scope.go:117] "RemoveContainer" containerID="2867ca1d41946eeecbfc494d499686f5ecf5f15b7090ccc842b585183da21368"
	Oct 25 10:21:16 old-k8s-version-714798 kubelet[710]: E1025 10:21:16.943025     710 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nbn6r_kubernetes-dashboard(f45f31cd-c886-48b9-8f84-70ac66dac634)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r" podUID="f45f31cd-c886-48b9-8f84-70ac66dac634"
	Oct 25 10:21:28 old-k8s-version-714798 kubelet[710]: I1025 10:21:28.377693     710 scope.go:117] "RemoveContainer" containerID="2867ca1d41946eeecbfc494d499686f5ecf5f15b7090ccc842b585183da21368"
	Oct 25 10:21:28 old-k8s-version-714798 kubelet[710]: E1025 10:21:28.378140     710 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nbn6r_kubernetes-dashboard(f45f31cd-c886-48b9-8f84-70ac66dac634)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r" podUID="f45f31cd-c886-48b9-8f84-70ac66dac634"
	Oct 25 10:21:29 old-k8s-version-714798 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:21:29 old-k8s-version-714798 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:21:29 old-k8s-version-714798 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 10:21:29 old-k8s-version-714798 systemd[1]: kubelet.service: Consumed 1.800s CPU time.
	
	
	==> kubernetes-dashboard [023f9eec31a026b72d57a05e021ddae34e171cf9477f9c45ccc83ccc83724ad3] <==
	2025/10/25 10:20:53 Starting overwatch
	2025/10/25 10:20:53 Using namespace: kubernetes-dashboard
	2025/10/25 10:20:53 Using in-cluster config to connect to apiserver
	2025/10/25 10:20:53 Using secret token for csrf signing
	2025/10/25 10:20:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 10:20:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 10:20:53 Successful initial request to the apiserver, version: v1.28.0
	2025/10/25 10:20:53 Generating JWE encryption key
	2025/10/25 10:20:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 10:20:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 10:20:53 Initializing JWE encryption key from synchronized object
	2025/10/25 10:20:53 Creating in-cluster Sidecar client
	2025/10/25 10:20:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:20:53 Serving insecurely on HTTP port: 9090
	2025/10/25 10:21:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [83298f29677812bdb89aebe27bacd5765cc414cfbcb8ae3820f968d7dfb2a0a8] <==
	I1025 10:20:34.506706       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:20:34.524999       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:20:34.525072       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 10:20:51.933825       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:20:51.933896       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"989e4d24-c526-4f80-8238-4bbd30d72adb", APIVersion:"v1", ResourceVersion:"581", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-714798_d6752aae-7aea-4dfb-845c-f0d95077fb09 became leader
	I1025 10:20:51.933963       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-714798_d6752aae-7aea-4dfb-845c-f0d95077fb09!
	I1025 10:20:52.034479       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-714798_d6752aae-7aea-4dfb-845c-f0d95077fb09!
	
	
	==> storage-provisioner [f986363d36450aecccdaa98aebe4eb5dbc429656a6bee1770bbfde083685da0c] <==
	I1025 10:20:33.923498       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 10:20:33.925778       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
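Two storage-provisioner containers appear in the dump above because the first instance (f98636…) started at 10:20:33, apparently before the apiserver was accepting connections, and died with "connection refused"; its replacement (83298f…) then acquired the k8s.io-minikube-hostpath lease at 10:20:51. That looks like ordinary startup churn rather than the pause failure itself. A minimal sketch of how one might confirm the restart count when reproducing locally, assuming the kubectl context named after the profile (a hypothetical follow-up, not part of the recorded run):

	# storage-provisioner is minikube's provisioner pod in kube-system; prints the container's restart count
	kubectl --context old-k8s-version-714798 -n kube-system get pod storage-provisioner -o jsonpath='{.status.containerStatuses[0].restartCount}'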
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-714798 -n old-k8s-version-714798
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-714798 -n old-k8s-version-714798: exit status 2 (422.088198ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-714798 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
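The kubelet log above ends with systemd stopping kubelet at 10:21:29, consistent with the pause attempt under test, while dashboard-metrics-scraper is still in CrashLoopBackOff (back-off 20s). A sketch of the usual triage for the crash-looping pod, assuming the context and pod name taken from the kubelet log (hypothetical follow-up commands, not part of the recorded run):

	# Show events and last state for the crash-looping scraper pod
	kubectl --context old-k8s-version-714798 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-5f989dc9cf-nbn6r
	# Fetch logs from the previous (crashed) container instance
	kubectl --context old-k8s-version-714798 -n kubernetes-dashboard logs dashboard-metrics-scraper-5f989dc9cf-nbn6r --previous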
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-714798
helpers_test.go:243: (dbg) docker inspect old-k8s-version-714798:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0ea7bd002b137c3f6132bee18d30042afc6bb85a08179349eb3f67fde3a86ecb",
	        "Created": "2025-10-25T10:19:03.747366257Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 624949,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:20:23.661708217Z",
	            "FinishedAt": "2025-10-25T10:20:22.439232386Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/0ea7bd002b137c3f6132bee18d30042afc6bb85a08179349eb3f67fde3a86ecb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0ea7bd002b137c3f6132bee18d30042afc6bb85a08179349eb3f67fde3a86ecb/hostname",
	        "HostsPath": "/var/lib/docker/containers/0ea7bd002b137c3f6132bee18d30042afc6bb85a08179349eb3f67fde3a86ecb/hosts",
	        "LogPath": "/var/lib/docker/containers/0ea7bd002b137c3f6132bee18d30042afc6bb85a08179349eb3f67fde3a86ecb/0ea7bd002b137c3f6132bee18d30042afc6bb85a08179349eb3f67fde3a86ecb-json.log",
	        "Name": "/old-k8s-version-714798",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-714798:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-714798",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0ea7bd002b137c3f6132bee18d30042afc6bb85a08179349eb3f67fde3a86ecb",
	                "LowerDir": "/var/lib/docker/overlay2/caac5b3fb2b5e719c459568c7f64a1473d2acbb34aff947f1f76651aa0e47b7e-init/diff:/var/lib/docker/overlay2/9d1960ec6ab151df7efbd4d43f9ccd1aaf4e0bd9e7db4285118644a6d5a54279/diff",
	                "MergedDir": "/var/lib/docker/overlay2/caac5b3fb2b5e719c459568c7f64a1473d2acbb34aff947f1f76651aa0e47b7e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/caac5b3fb2b5e719c459568c7f64a1473d2acbb34aff947f1f76651aa0e47b7e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/caac5b3fb2b5e719c459568c7f64a1473d2acbb34aff947f1f76651aa0e47b7e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-714798",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-714798/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-714798",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-714798",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-714798",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e14c14548a217e08acad70a94ff612b8194ce10d18e44d38b1610ff6ad44411",
	            "SandboxKey": "/var/run/docker/netns/6e14c14548a2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-714798": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:07:d1:a3:ed:35",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cc93092e09ae8d654ec66b5e009efa3952011514f4834e7a4c9ac844956e7c64",
	                    "EndpointID": "1191ec2278d7b3d2d4eaf7d26d25e09f27426e8e73a0abef25c8752b85349e20",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-714798",
	                        "0ea7bd002b13"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
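Note that docker inspect reports "Status": "running" with "Paused": false, so the failed pause left the Docker container itself unpaused; the exit status 2 from the surrounding status checks (which the harness itself notes "may be ok") reflects component state inside the node rather than container state. A one-liner to check just those two fields, assuming the same container name (a sketch, not part of the recorded run):

	# Go-template query against the inspect JSON shown above
	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' old-k8s-version-714798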
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-714798 -n old-k8s-version-714798
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-714798 -n old-k8s-version-714798: exit status 2 (388.951972ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-714798 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-714798 logs -n 25: (1.371110446s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p flannel-119085                                                                                                                                                                                                                             │ flannel-119085               │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ stop    │ -p old-k8s-version-714798 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p newest-cni-667966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-714798 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p old-k8s-version-714798 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:21 UTC │
	│ addons  │ enable metrics-server -p no-preload-899665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ stop    │ -p no-preload-899665 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ addons  │ enable metrics-server -p newest-cni-667966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ stop    │ -p newest-cni-667966 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-767846 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-667966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p newest-cni-667966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ stop    │ -p default-k8s-diff-port-767846 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:21 UTC │
	│ addons  │ enable dashboard -p no-preload-899665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p no-preload-899665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ image   │ newest-cni-667966 image list --format=json                                                                                                                                                                                                    │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ pause   │ -p newest-cni-667966 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-767846 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ start   │ -p default-k8s-diff-port-767846 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	│ delete  │ -p newest-cni-667966                                                                                                                                                                                                                          │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p newest-cni-667966                                                                                                                                                                                                                          │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p disable-driver-mounts-805899                                                                                                                                                                                                               │ disable-driver-mounts-805899 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ start   │ -p embed-certs-683681 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-683681           │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	│ image   │ old-k8s-version-714798 image list --format=json                                                                                                                                                                                               │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ pause   │ -p old-k8s-version-714798 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:21:10
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:21:10.148251  638584 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:21:10.148605  638584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:21:10.148630  638584 out.go:374] Setting ErrFile to fd 2...
	I1025 10:21:10.148638  638584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:21:10.148938  638584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 10:21:10.149711  638584 out.go:368] Setting JSON to false
	I1025 10:21:10.151634  638584 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7419,"bootTime":1761380251,"procs":447,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 10:21:10.151786  638584 start.go:141] virtualization: kvm guest
	I1025 10:21:10.154262  638584 out.go:179] * [embed-certs-683681] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 10:21:10.155881  638584 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:21:10.155931  638584 notify.go:220] Checking for updates...
	I1025 10:21:10.158857  638584 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:21:10.160458  638584 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:21:10.161966  638584 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	I1025 10:21:10.163444  638584 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 10:21:10.165074  638584 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:21:10.167201  638584 config.go:182] Loaded profile config "default-k8s-diff-port-767846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:10.167413  638584 config.go:182] Loaded profile config "no-preload-899665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:10.167543  638584 config.go:182] Loaded profile config "old-k8s-version-714798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:21:10.167677  638584 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:21:10.195271  638584 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 10:21:10.195411  638584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:21:10.276912  638584 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-25 10:21:10.253206883 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:21:10.277024  638584 docker.go:318] overlay module found
	I1025 10:21:10.278915  638584 out.go:179] * Using the docker driver based on user configuration
	I1025 10:21:10.280189  638584 start.go:305] selected driver: docker
	I1025 10:21:10.280210  638584 start.go:925] validating driver "docker" against <nil>
	I1025 10:21:10.280228  638584 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:21:10.280870  638584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:21:10.351945  638584 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-25 10:21:10.340512633 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:21:10.352169  638584 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 10:21:10.352450  638584 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:21:10.354600  638584 out.go:179] * Using Docker driver with root privileges
	I1025 10:21:10.356067  638584 cni.go:84] Creating CNI manager for ""
	I1025 10:21:10.356119  638584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:21:10.356128  638584 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 10:21:10.356206  638584 start.go:349] cluster config:
	{Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:21:10.359204  638584 out.go:179] * Starting "embed-certs-683681" primary control-plane node in "embed-certs-683681" cluster
	I1025 10:21:10.360475  638584 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:21:10.361884  638584 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:21:10.363223  638584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:10.363261  638584 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:21:10.363282  638584 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 10:21:10.363300  638584 cache.go:58] Caching tarball of preloaded images
	I1025 10:21:10.363426  638584 preload.go:233] Found /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 10:21:10.363440  638584 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:21:10.363573  638584 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/config.json ...
	I1025 10:21:10.363603  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/config.json: {Name:mk7d7cb38e92abe91e5617ae8c0cde69820d256b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:10.401470  638584 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:21:10.401501  638584 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:21:10.401524  638584 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:21:10.401557  638584 start.go:360] acquireMachinesLock for embed-certs-683681: {Name:mkb49d854e007783568583b216321c2ada753d14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:21:10.401681  638584 start.go:364] duration metric: took 100.361µs to acquireMachinesLock for "embed-certs-683681"
	I1025 10:21:10.401719  638584 start.go:93] Provisioning new machine with config: &{Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:21:10.401811  638584 start.go:125] createHost starting for "" (driver="docker")
	I1025 10:21:09.341512  636484 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:21:09.341546  636484 machine.go:96] duration metric: took 4.679953004s to provisionDockerMachine
	I1025 10:21:09.341561  636484 start.go:293] postStartSetup for "default-k8s-diff-port-767846" (driver="docker")
	I1025 10:21:09.341576  636484 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:21:09.341718  636484 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:21:09.341793  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.365110  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:09.484377  636484 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:21:09.489414  636484 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:21:09.489442  636484 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:21:09.489453  636484 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/addons for local assets ...
	I1025 10:21:09.489516  636484 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/files for local assets ...
	I1025 10:21:09.489612  636484 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem -> 3254552.pem in /etc/ssl/certs
	I1025 10:21:09.489735  636484 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:21:09.499262  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:09.521134  636484 start.go:296] duration metric: took 179.55364ms for postStartSetup
	I1025 10:21:09.521229  636484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:21:09.521289  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.546865  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:09.651523  636484 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:21:09.656840  636484 fix.go:56] duration metric: took 5.400890226s for fixHost
	I1025 10:21:09.656881  636484 start.go:83] releasing machines lock for "default-k8s-diff-port-767846", held for 5.400960044s
	I1025 10:21:09.656963  636484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-767846
	I1025 10:21:09.678291  636484 ssh_runner.go:195] Run: cat /version.json
	I1025 10:21:09.678335  636484 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:21:09.678385  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.678417  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.699727  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:09.699888  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:09.801273  636484 ssh_runner.go:195] Run: systemctl --version
	I1025 10:21:09.869861  636484 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:21:09.912691  636484 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:21:09.918693  636484 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:21:09.918789  636484 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:21:09.929691  636484 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
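
[Editor's note] The two steps above disable any host bridge/podman CNI configs simply by renaming them with a ".mk_disabled" suffix, so cri-o stops loading them without deleting anything. A minimal Go sketch of the same rename-to-disable pass (not minikube's actual source; it assumes the /etc/cni/net.d path from the log and root privileges):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d" // path taken from the log above
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		// Only plain files matching *bridge* or *podman*, not already disabled.
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		// Disable by rename, mirroring `mv {} {}.mk_disabled`.
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
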
	I1025 10:21:09.929723  636484 start.go:495] detecting cgroup driver to use...
	I1025 10:21:09.929768  636484 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 10:21:09.929846  636484 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:21:09.947292  636484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:21:09.962309  636484 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:21:09.962380  636484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:21:09.981742  636484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:21:09.997805  636484 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:21:10.091545  636484 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:21:10.191661  636484 docker.go:234] disabling docker service ...
	I1025 10:21:10.191739  636484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:21:10.211470  636484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:21:10.232902  636484 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:21:10.343594  636484 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:21:10.458272  636484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:21:10.475115  636484 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:21:10.492690  636484 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:21:10.492760  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.505848  636484 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 10:21:10.505908  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.517567  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.531478  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.545455  636484 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:21:10.557702  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.571143  636484 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.582240  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
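
[Editor's note] The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cri-o to the systemd cgroup driver, and open unprivileged ports via default_sysctls. A rough Go equivalent of the two core substitutions, for illustration only (values copied from the log; the tool itself shells out to sed as shown):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}
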
	I1025 10:21:10.593233  636484 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:21:10.602910  636484 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:21:10.612119  636484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:10.705561  636484 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:21:10.849205  636484 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:21:10.849299  636484 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:21:10.853987  636484 start.go:563] Will wait 60s for crictl version
	I1025 10:21:10.854061  636484 ssh_runner.go:195] Run: which crictl
	I1025 10:21:10.858281  636484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:21:10.891437  636484 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:21:10.891545  636484 ssh_runner.go:195] Run: crio --version
	I1025 10:21:10.928397  636484 ssh_runner.go:195] Run: crio --version
	I1025 10:21:10.968448  636484 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:21:10.969831  636484 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-767846 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:21:10.988308  636484 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1025 10:21:10.993548  636484 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
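
[Editor's note] That one-liner makes the host.minikube.internal mapping idempotent: strip any stale line, append the fresh one, then copy the result over /etc/hosts. A minimal Go sketch of the same strip-and-append technique (IP and hostname taken from the log; writing /etc/hosts is assumed to require root):

package main

import (
	"os"
	"strings"
)

func main() {
	// Values copied from the log line above.
	const entry = "192.168.103.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale mapping, like the `grep -v $'\thost.minikube.internal$'`.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	// Write to a sibling temp file, then rename over /etc/hosts (same filesystem).
	tmp := "/etc/hosts.minikube.tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
	if err := os.Rename(tmp, "/etc/hosts"); err != nil {
		panic(err)
	}
}
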
	I1025 10:21:11.007467  636484 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-767846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-767846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:21:11.007638  636484 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:11.007713  636484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:11.050081  636484 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:11.050104  636484 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:21:11.050159  636484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:11.079408  636484 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:11.079432  636484 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:21:11.079440  636484 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1025 10:21:11.079542  636484 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-767846 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-767846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:21:11.079604  636484 ssh_runner.go:195] Run: crio config
	I1025 10:21:11.135081  636484 cni.go:84] Creating CNI manager for ""
	I1025 10:21:11.135104  636484 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:21:11.135125  636484 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:21:11.135152  636484 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-767846 NodeName:default-k8s-diff-port-767846 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:21:11.135274  636484 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-767846"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:21:11.135376  636484 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:21:11.146044  636484 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:21:11.146127  636484 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:21:11.157527  636484 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1025 10:21:11.173105  636484 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:21:11.194054  636484 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
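
[Editor's note] minikube renders the kubeadm YAML above from Go templates before copying it to /var/tmp/minikube/kubeadm.yaml.new. A toy sketch of that template-rendering approach, covering only a few InitConfiguration fields seen above (the template text and struct here are illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// Illustrative subset only; the real template carries many more fields.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	// Parameter values copied from the kubeadm options logged above.
	params := struct {
		AdvertiseAddress string
		APIServerPort    int
		CRISocket        string
		NodeName         string
	}{"192.168.103.2", 8444, "/var/run/crio/crio.sock", "default-k8s-diff-port-767846"}
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
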
	I1025 10:21:11.210598  636484 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:21:11.215039  636484 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:21:11.228199  636484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:11.315547  636484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:21:11.344889  636484 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846 for IP: 192.168.103.2
	I1025 10:21:11.344914  636484 certs.go:195] generating shared ca certs ...
	I1025 10:21:11.344936  636484 certs.go:227] acquiring lock for ca certs: {Name:mke559c04eea9c265cb2a7beab0da125bee52db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:11.345096  636484 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key
	I1025 10:21:11.345147  636484 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key
	I1025 10:21:11.345159  636484 certs.go:257] generating profile certs ...
	I1025 10:21:11.345283  636484 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/client.key
	I1025 10:21:11.345382  636484 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/apiserver.key.0fbb729d
	I1025 10:21:11.345433  636484 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/proxy-client.key
	I1025 10:21:11.345576  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem (1338 bytes)
	W1025 10:21:11.345621  636484 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455_empty.pem, impossibly tiny 0 bytes
	I1025 10:21:11.345634  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:21:11.345661  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:21:11.345688  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:21:11.345716  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem (1679 bytes)
	I1025 10:21:11.345768  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:11.346665  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:21:11.371779  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 10:21:11.395674  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:21:11.420943  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:21:11.450225  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 10:21:11.471921  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:21:11.491964  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:21:11.513657  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:21:11.539802  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem --> /usr/share/ca-certificates/325455.pem (1338 bytes)
	I1025 10:21:11.564482  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /usr/share/ca-certificates/3254552.pem (1708 bytes)
	I1025 10:21:11.585472  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:21:11.605762  636484 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:21:11.620550  636484 ssh_runner.go:195] Run: openssl version
	I1025 10:21:11.628742  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/325455.pem && ln -fs /usr/share/ca-certificates/325455.pem /etc/ssl/certs/325455.pem"
	I1025 10:21:11.640494  636484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/325455.pem
	I1025 10:21:11.645456  636484 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:38 /usr/share/ca-certificates/325455.pem
	I1025 10:21:11.645535  636484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/325455.pem
	I1025 10:21:11.681821  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/325455.pem /etc/ssl/certs/51391683.0"
	I1025 10:21:11.692404  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3254552.pem && ln -fs /usr/share/ca-certificates/3254552.pem /etc/ssl/certs/3254552.pem"
	I1025 10:21:11.702722  636484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3254552.pem
	I1025 10:21:11.707367  636484 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:38 /usr/share/ca-certificates/3254552.pem
	I1025 10:21:11.707434  636484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3254552.pem
	I1025 10:21:11.744550  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3254552.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:21:11.754748  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:21:11.765670  636484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:11.770501  636484 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:32 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:11.770568  636484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:11.806437  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
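
[Editor's note] The pattern in the last several commands is how OpenSSL-style trust stores are populated: compute the certificate's subject hash, then symlink the cert as <hash>.0 under /etc/ssl/certs (b5213941.0 above is minikubeCA's hash). A small Go sketch of that hash-and-link step (paths from the log; it shells out to the same openssl invocation and assumes root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// Same call as in the log: openssl prints the subject hash (e.g. b5213941).
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // `ln -fs` semantics: replace any existing link
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}
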
	I1025 10:21:11.816622  636484 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:21:11.821750  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:21:11.869084  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:21:11.918865  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:21:11.967891  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:21:12.023868  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:21:12.087958  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
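
[Editor's note] Each `-checkend 86400` call above asks whether a certificate stays valid for at least the next 24 hours. The same check in pure Go with crypto/x509, as a sketch (cert path copied from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Equivalent of `openssl x509 -noout -in <cert> -checkend 86400`: exit
// non-zero if the certificate expires within the next 24 hours.
func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is good for at least 24h")
}
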
	I1025 10:21:12.133903  636484 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-767846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-767846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:21:12.133995  636484 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:21:12.134057  636484 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:21:12.176249  636484 cri.go:89] found id: "5651b5355eb316ad91569abe8d79084a109bfb7f5e3317226217acc032d02de1"
	I1025 10:21:12.176277  636484 cri.go:89] found id: "4a3076ac0e1e7cab1ae1e3436bd70e3c3b3965b186f842a7e0c0d524505d0c57"
	I1025 10:21:12.176284  636484 cri.go:89] found id: "19816f19d39c5773a667353841a1802f9e8d4a9493ed76177e3cffba9eb45dd7"
	I1025 10:21:12.176289  636484 cri.go:89] found id: "93e7c0501a9a92272de292874e804fe8724d5cd8097e77aa3924e634b8f8d63b"
	I1025 10:21:12.176294  636484 cri.go:89] found id: ""
	I1025 10:21:12.176379  636484 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:21:12.191582  636484 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:21:12Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:21:12.191656  636484 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:21:12.201840  636484 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:21:12.201870  636484 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:21:12.201918  636484 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:21:12.211065  636484 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:21:12.211910  636484 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-767846" does not appear in /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:21:12.212424  636484 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-321838/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-767846" cluster setting kubeconfig missing "default-k8s-diff-port-767846" context setting]
	I1025 10:21:12.212991  636484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:12.214595  636484 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:21:12.225309  636484 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1025 10:21:12.225361  636484 kubeadm.go:601] duration metric: took 23.484211ms to restartPrimaryControlPlane
	I1025 10:21:12.225372  636484 kubeadm.go:402] duration metric: took 91.480993ms to StartCluster
	I1025 10:21:12.225394  636484 settings.go:142] acquiring lock: {Name:mkccf1ae03faaf532633e370e89b56d0fc3d2b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:12.225489  636484 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:21:12.226739  636484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:12.227039  636484 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:21:12.227167  636484 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:21:12.227262  636484 config.go:182] Loaded profile config "default-k8s-diff-port-767846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:12.227271  636484 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-767846"
	I1025 10:21:12.227291  636484 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-767846"
	W1025 10:21:12.227299  636484 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:21:12.227297  636484 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-767846"
	I1025 10:21:12.227332  636484 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-767846"
	I1025 10:21:12.227339  636484 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-767846"
	W1025 10:21:12.227342  636484 addons.go:247] addon dashboard should already be in state true
	I1025 10:21:12.227353  636484 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-767846"
	I1025 10:21:12.227367  636484 host.go:66] Checking if "default-k8s-diff-port-767846" exists ...
	I1025 10:21:12.227371  636484 host.go:66] Checking if "default-k8s-diff-port-767846" exists ...
	I1025 10:21:12.227806  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.227847  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.227905  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.232961  636484 out.go:179] * Verifying Kubernetes components...
	I1025 10:21:12.234572  636484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:12.260042  636484 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:21:12.260116  636484 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:21:12.261263  636484 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-767846"
	W1025 10:21:12.261282  636484 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:21:12.261305  636484 host.go:66] Checking if "default-k8s-diff-port-767846" exists ...
	I1025 10:21:12.261728  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.262059  636484 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:21:12.262078  636484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:21:12.262129  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:12.265414  636484 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1025 10:21:09.268544  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	W1025 10:21:11.766755  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	W1025 10:21:09.831833  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:12.337504  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	I1025 10:21:12.266825  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:21:12.266852  636484 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:21:12.266926  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:12.302238  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:12.306595  636484 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:21:12.306701  636484 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:21:12.306633  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:12.307467  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:12.337295  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:12.414307  636484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:21:12.436001  636484 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-767846" to be "Ready" ...
	I1025 10:21:12.436611  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:21:12.436644  636484 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:21:12.451080  636484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:21:12.456814  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:21:12.456844  636484 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:21:12.465383  636484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:21:12.479456  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:21:12.479485  636484 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:21:12.501005  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:21:12.501032  636484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:21:12.526625  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:21:12.526672  636484 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:21:12.553034  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:21:12.553076  636484 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:21:12.573193  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:21:12.573227  636484 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:21:12.590613  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:21:12.590687  636484 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:21:12.606035  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:21:12.606071  636484 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:21:12.624851  636484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:21:13.931289  636484 node_ready.go:49] node "default-k8s-diff-port-767846" is "Ready"
	I1025 10:21:13.931333  636484 node_ready.go:38] duration metric: took 1.495294194s for node "default-k8s-diff-port-767846" to be "Ready" ...
	I1025 10:21:13.931355  636484 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:21:13.931415  636484 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
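
[Editor's note] Waiting for the apiserver process is a poll-until-deadline loop around the pgrep shown above. A sketch of such a loop (the two-minute timeout is illustrative, not minikube's actual value):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		// Same pattern string as the log line above; pgrep exits non-zero until a match.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("apiserver process appeared, pid %s", out)
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
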
	I1025 10:21:10.403779  638584 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 10:21:10.404001  638584 start.go:159] libmachine.API.Create for "embed-certs-683681" (driver="docker")
	I1025 10:21:10.404030  638584 client.go:168] LocalClient.Create starting
	I1025 10:21:10.404114  638584 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem
	I1025 10:21:10.404167  638584 main.go:141] libmachine: Decoding PEM data...
	I1025 10:21:10.404189  638584 main.go:141] libmachine: Parsing certificate...
	I1025 10:21:10.404267  638584 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem
	I1025 10:21:10.404309  638584 main.go:141] libmachine: Decoding PEM data...
	I1025 10:21:10.404335  638584 main.go:141] libmachine: Parsing certificate...
	I1025 10:21:10.404773  638584 cli_runner.go:164] Run: docker network inspect embed-certs-683681 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 10:21:10.426055  638584 cli_runner.go:211] docker network inspect embed-certs-683681 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 10:21:10.426150  638584 network_create.go:284] running [docker network inspect embed-certs-683681] to gather additional debugging logs...
	I1025 10:21:10.426175  638584 cli_runner.go:164] Run: docker network inspect embed-certs-683681
	W1025 10:21:10.450027  638584 cli_runner.go:211] docker network inspect embed-certs-683681 returned with exit code 1
	I1025 10:21:10.450066  638584 network_create.go:287] error running [docker network inspect embed-certs-683681]: docker network inspect embed-certs-683681: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-683681 not found
	I1025 10:21:10.450079  638584 network_create.go:289] output of [docker network inspect embed-certs-683681]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-683681 not found
	
	** /stderr **
	I1025 10:21:10.450215  638584 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:21:10.472971  638584 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b7c770f4d6bb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:31:17:4a:ca:3a} reservation:<nil>}
	I1025 10:21:10.473601  638584 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5189eca196b1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:42:d7:a0:fe:65} reservation:<nil>}
	I1025 10:21:10.474232  638584 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a58b5f36975c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1e:4d:ae:71:f0:49} reservation:<nil>}
	I1025 10:21:10.474754  638584 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c8aca1f62a35 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ce:65:a5:98:3f:04} reservation:<nil>}
	I1025 10:21:10.475283  638584 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-cc93092e09ae IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:c2:73:0a:fa:f6:13} reservation:<nil>}
	I1025 10:21:10.475999  638584 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a03c50}
	I1025 10:21:10.476026  638584 network_create.go:124] attempt to create docker network embed-certs-683681 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1025 10:21:10.476083  638584 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-683681 embed-certs-683681
	I1025 10:21:10.551427  638584 network_create.go:108] docker network embed-certs-683681 192.168.94.0/24 created
	I1025 10:21:10.551459  638584 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-683681" container
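The scan above steps through minikube's candidate private /24s (third octet 49, 58, 67, 76, 85, 94, i.e. +9 per attempt) and takes the first one with no existing bridge interface. A minimal Go sketch of that selection, assuming the same step size; the helper name is illustrative, not minikube's network.go API:

package main

import "fmt"

// firstFreeSubnet walks candidate 192.168.x.0/24 subnets, starting at
// third octet 49 and stepping by 9 (matching the log: 49, 58, 67, 76,
// 85, 94), and returns the first one not present in taken.
func firstFreeSubnet(taken map[string]bool) (string, bool) {
	for octet := 49; octet <= 254; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[subnet] {
			return subnet, true
		}
	}
	return "", false
}

func main() {
	// Subnets the log reports as taken by existing bridge networks.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	if s, ok := firstFreeSubnet(taken); ok {
		fmt.Println("using free private subnet", s) // 192.168.94.0/24
	}
}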
	I1025 10:21:10.551518  638584 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 10:21:10.575731  638584 cli_runner.go:164] Run: docker volume create embed-certs-683681 --label name.minikube.sigs.k8s.io=embed-certs-683681 --label created_by.minikube.sigs.k8s.io=true
	I1025 10:21:10.596450  638584 oci.go:103] Successfully created a docker volume embed-certs-683681
	I1025 10:21:10.596543  638584 cli_runner.go:164] Run: docker run --rm --name embed-certs-683681-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-683681 --entrypoint /usr/bin/test -v embed-certs-683681:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 10:21:11.043993  638584 oci.go:107] Successfully prepared a docker volume embed-certs-683681
	I1025 10:21:11.044039  638584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:11.044062  638584 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 10:21:11.044129  638584 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-683681:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1025 10:21:13.772552  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	I1025 10:21:14.336599  624632 pod_ready.go:94] pod "coredns-5dd5756b68-k5644" is "Ready"
	I1025 10:21:14.336630  624632 pod_ready.go:86] duration metric: took 39.577109588s for pod "coredns-5dd5756b68-k5644" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.340650  624632 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.346235  624632 pod_ready.go:94] pod "etcd-old-k8s-version-714798" is "Ready"
	I1025 10:21:14.346269  624632 pod_ready.go:86] duration metric: took 5.588309ms for pod "etcd-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.349654  624632 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.355198  624632 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-714798" is "Ready"
	I1025 10:21:14.355230  624632 pod_ready.go:86] duration metric: took 5.550064ms for pod "kube-apiserver-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.359203  624632 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.515864  624632 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-714798" is "Ready"
	I1025 10:21:14.515908  624632 pod_ready.go:86] duration metric: took 156.674255ms for pod "kube-controller-manager-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.679941  624632 pod_ready.go:83] waiting for pod "kube-proxy-kqg7q" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.064359  624632 pod_ready.go:94] pod "kube-proxy-kqg7q" is "Ready"
	I1025 10:21:15.064395  624632 pod_ready.go:86] duration metric: took 384.425103ms for pod "kube-proxy-kqg7q" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.264420  624632 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.664469  624632 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-714798" is "Ready"
	I1025 10:21:15.664501  624632 pod_ready.go:86] duration metric: took 400.048856ms for pod "kube-scheduler-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.664517  624632 pod_ready.go:40] duration metric: took 40.910543454s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
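The pod_ready checks above poll each kube-system pod's PodReady condition until it is True or the 4m0s budget runs out. A minimal client-go sketch of the same check, assuming a kubeconfig at the default location and using one of the pod names from this run; an illustration, not minikube's pod_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-k5644", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod")
			return
		case <-time.After(2 * time.Second):
		}
	}
}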
	I1025 10:21:15.713277  624632 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1025 10:21:15.739862  624632 out.go:203] 
	W1025 10:21:15.783078  624632 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1025 10:21:15.791059  624632 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1025 10:21:15.796132  624632 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-714798" cluster and "default" namespace by default
	I1025 10:21:15.245915  636484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.794706474s)
	I1025 10:21:15.246013  636484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.780553475s)
	I1025 10:21:16.201960  636484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.577043142s)
	I1025 10:21:16.202175  636484 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.270743207s)
	I1025 10:21:16.202205  636484 api_server.go:72] duration metric: took 3.975127965s to wait for apiserver process to appear ...
	I1025 10:21:16.202212  636484 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:21:16.202233  636484 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1025 10:21:16.203931  636484 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-767846 addons enable metrics-server
	
	I1025 10:21:16.206179  636484 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1025 10:21:14.831620  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:16.832274  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	I1025 10:21:16.207469  636484 addons.go:514] duration metric: took 3.980316596s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1025 10:21:16.208161  636484 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:21:16.208186  636484 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:21:16.702507  636484 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1025 10:21:16.707281  636484 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1025 10:21:16.708497  636484 api_server.go:141] control plane version: v1.34.1
	I1025 10:21:16.708529  636484 api_server.go:131] duration metric: took 506.309184ms to wait for apiserver health ...
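The /healthz probe above first sees a 500 while the rbac/bootstrap-roles post-start hook is still pending, then 200 on the next poll half a second later. A sketch of such a probe loop; skipping TLS verification is a shortcut for the sketch only (minikube authenticates against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Sketch-only shortcut: the real check trusts the cluster CA
		// instead of disabling verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.103.2:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
}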
	I1025 10:21:16.708542  636484 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:21:16.712747  636484 system_pods.go:59] 8 kube-system pods found
	I1025 10:21:16.712806  636484 system_pods.go:61] "coredns-66bc5c9577-rznxv" [d7eae20c-8d39-4486-ab11-13675911180f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:16.712819  636484 system_pods.go:61] "etcd-default-k8s-diff-port-767846" [7612c238-ca0d-458d-a901-e96167590fc2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:21:16.712835  636484 system_pods.go:61] "kindnet-vcqs2" [e41fd0fd-97c4-44ef-a645-cf0136340098] Running
	I1025 10:21:16.712845  636484 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-767846" [6eaa12a1-d4b6-4e96-81ed-18662e83034c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:21:16.712859  636484 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-767846" [7ffb55c8-db2f-4807-ace7-044a0c281f62] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:21:16.712874  636484 system_pods.go:61] "kube-proxy-cvm5c" [42278e98-5278-4efa-b484-ec73c16fc851] Running
	I1025 10:21:16.712885  636484 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-767846" [934d0b38-85ad-4c54-9e94-970177aa8cf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:21:16.712924  636484 system_pods.go:61] "storage-provisioner" [06a917da-eaa2-4b50-8c56-31a0ca7d14e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:16.712936  636484 system_pods.go:74] duration metric: took 4.383599ms to wait for pod list to return data ...
	I1025 10:21:16.712948  636484 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:21:16.715673  636484 default_sa.go:45] found service account: "default"
	I1025 10:21:16.715694  636484 default_sa.go:55] duration metric: took 2.737037ms for default service account to be created ...
	I1025 10:21:16.715704  636484 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:21:16.718943  636484 system_pods.go:86] 8 kube-system pods found
	I1025 10:21:16.718978  636484 system_pods.go:89] "coredns-66bc5c9577-rznxv" [d7eae20c-8d39-4486-ab11-13675911180f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:16.718990  636484 system_pods.go:89] "etcd-default-k8s-diff-port-767846" [7612c238-ca0d-458d-a901-e96167590fc2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:21:16.718997  636484 system_pods.go:89] "kindnet-vcqs2" [e41fd0fd-97c4-44ef-a645-cf0136340098] Running
	I1025 10:21:16.719005  636484 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-767846" [6eaa12a1-d4b6-4e96-81ed-18662e83034c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:21:16.719014  636484 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-767846" [7ffb55c8-db2f-4807-ace7-044a0c281f62] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:21:16.719034  636484 system_pods.go:89] "kube-proxy-cvm5c" [42278e98-5278-4efa-b484-ec73c16fc851] Running
	I1025 10:21:16.719042  636484 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-767846" [934d0b38-85ad-4c54-9e94-970177aa8cf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:21:16.719049  636484 system_pods.go:89] "storage-provisioner" [06a917da-eaa2-4b50-8c56-31a0ca7d14e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:16.719059  636484 system_pods.go:126] duration metric: took 3.347724ms to wait for k8s-apps to be running ...
	I1025 10:21:16.719070  636484 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:21:16.719120  636484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:21:16.733907  636484 system_svc.go:56] duration metric: took 14.825705ms WaitForService to wait for kubelet
	I1025 10:21:16.733943  636484 kubeadm.go:586] duration metric: took 4.506864504s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:21:16.733968  636484 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:21:16.737241  636484 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 10:21:16.737269  636484 node_conditions.go:123] node cpu capacity is 8
	I1025 10:21:16.737284  636484 node_conditions.go:105] duration metric: took 3.310515ms to run NodePressure ...
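The NodePressure verification above reduces to reading node capacity and confirming no pressure condition is True. A client-go sketch under the same kubeconfig assumption as the earlier pod sketch:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Quantities must be addressable before calling String().
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					fmt.Printf("  pressure condition set: %s\n", c.Type)
				}
			}
		}
	}
}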
	I1025 10:21:16.737296  636484 start.go:241] waiting for startup goroutines ...
	I1025 10:21:16.737306  636484 start.go:246] waiting for cluster config update ...
	I1025 10:21:16.737329  636484 start.go:255] writing updated cluster config ...
	I1025 10:21:16.737611  636484 ssh_runner.go:195] Run: rm -f paused
	I1025 10:21:16.742069  636484 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:16.748801  636484 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rznxv" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:21:18.754620  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:16.111649  638584 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-683681:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.067461823s)
	I1025 10:21:16.111690  638584 kic.go:203] duration metric: took 5.067622848s to extract preloaded images to volume ...
	W1025 10:21:16.111819  638584 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 10:21:16.111866  638584 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 10:21:16.111917  638584 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 10:21:16.213690  638584 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-683681 --name embed-certs-683681 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-683681 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-683681 --network embed-certs-683681 --ip 192.168.94.2 --volume embed-certs-683681:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 10:21:16.572477  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Running}}
	I1025 10:21:16.594243  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:16.615558  638584 cli_runner.go:164] Run: docker exec embed-certs-683681 stat /var/lib/dpkg/alternatives/iptables
	I1025 10:21:16.666536  638584 oci.go:144] the created container "embed-certs-683681" has a running status.
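cli_runner lines like the inspect calls above are plain docker invocations with Go templates selecting the output. A sketch of the {{.State.Running}} check via os/exec:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerRunning shells out to docker, as the cli_runner lines above
// do, and parses the {{.State.Running}} template output ("true"/"false").
func containerRunning(name string) (bool, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Running}}").Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "true", nil
}

func main() {
	running, err := containerRunning("embed-certs-683681")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("running:", running)
}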
	I1025 10:21:16.666576  638584 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa...
	I1025 10:21:16.809984  638584 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 10:21:16.847757  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:16.871585  638584 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 10:21:16.871610  638584 kic_runner.go:114] Args: [docker exec --privileged embed-certs-683681 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 10:21:16.923128  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:16.943365  638584 machine.go:93] provisionDockerMachine start ...
	I1025 10:21:16.943479  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:16.966341  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:16.966647  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:16.966668  638584 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:21:16.967537  638584 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56448->127.0.0.1:33128: read: connection reset by peer
	I1025 10:21:20.116967  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-683681
	
	I1025 10:21:20.117014  638584 ubuntu.go:182] provisioning hostname "embed-certs-683681"
	I1025 10:21:20.117084  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:20.137778  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:20.138008  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:20.138021  638584 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-683681 && echo "embed-certs-683681" | sudo tee /etc/hostname
	W1025 10:21:19.333601  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:21.831601  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:20.755645  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:22.755896  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:20.296939  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-683681
	
	I1025 10:21:20.297025  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:20.319104  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:20.319456  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:20.319479  638584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-683681' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-683681/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-683681' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:21:20.480669  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
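Provisioning runs each command over SSH to the port Docker published for 22/tcp (127.0.0.1:33128 here). The first dial at 10:21:16.967 hit "connection reset by peer" because sshd inside the container was not up yet; the retry succeeded. A sketch of that dial-with-retry using golang.org/x/crypto/ssh and the key path from this run (InsecureIgnoreHostKey is tolerable here only because the port is loopback-published):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // loopback-published port only
		Timeout:         5 * time.Second,
	}
	var client *ssh.Client
	for i := 0; i < 20; i++ {
		client, err = ssh.Dial("tcp", "127.0.0.1:33128", cfg)
		if err == nil {
			break
		}
		time.Sleep(time.Second) // sshd in the container needs a moment
	}
	if client == nil {
		panic(err)
	}
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	out, _ := sess.CombinedOutput("hostname")
	fmt.Print(string(out)) // embed-certs-683681
}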
	I1025 10:21:20.480704  638584 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-321838/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-321838/.minikube}
	I1025 10:21:20.480727  638584 ubuntu.go:190] setting up certificates
	I1025 10:21:20.480741  638584 provision.go:84] configureAuth start
	I1025 10:21:20.480822  638584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-683681
	I1025 10:21:20.505092  638584 provision.go:143] copyHostCerts
	I1025 10:21:20.505168  638584 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem, removing ...
	I1025 10:21:20.505184  638584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem
	I1025 10:21:20.505274  638584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem (1679 bytes)
	I1025 10:21:20.505416  638584 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem, removing ...
	I1025 10:21:20.505430  638584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem
	I1025 10:21:20.505476  638584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem (1078 bytes)
	I1025 10:21:20.505561  638584 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem, removing ...
	I1025 10:21:20.505572  638584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem
	I1025 10:21:20.505630  638584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem (1123 bytes)
	I1025 10:21:20.505706  638584 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem org=jenkins.embed-certs-683681 san=[127.0.0.1 192.168.94.2 embed-certs-683681 localhost minikube]
	I1025 10:21:20.998585  638584 provision.go:177] copyRemoteCerts
	I1025 10:21:20.998661  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:21:20.998717  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.022129  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.137465  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 10:21:21.166388  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:21:21.193168  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:21:21.218286  638584 provision.go:87] duration metric: took 737.524136ms to configureAuth
	I1025 10:21:21.218330  638584 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:21:21.218553  638584 config.go:182] Loaded profile config "embed-certs-683681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:21.218676  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.245915  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:21.246236  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:21.246262  638584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:21:21.569413  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:21:21.569443  638584 machine.go:96] duration metric: took 4.626049853s to provisionDockerMachine
	I1025 10:21:21.569456  638584 client.go:171] duration metric: took 11.165417694s to LocalClient.Create
	I1025 10:21:21.569475  638584 start.go:167] duration metric: took 11.165474816s to libmachine.API.Create "embed-certs-683681"
	I1025 10:21:21.569486  638584 start.go:293] postStartSetup for "embed-certs-683681" (driver="docker")
	I1025 10:21:21.569498  638584 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:21:21.569575  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:21:21.569622  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.594722  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.713328  638584 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:21:21.718538  638584 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:21:21.718572  638584 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:21:21.718589  638584 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/addons for local assets ...
	I1025 10:21:21.718659  638584 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/files for local assets ...
	I1025 10:21:21.718787  638584 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem -> 3254552.pem in /etc/ssl/certs
	I1025 10:21:21.718927  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:21:21.729097  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:21.759300  638584 start.go:296] duration metric: took 189.796063ms for postStartSetup
	I1025 10:21:21.759764  638584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-683681
	I1025 10:21:21.783751  638584 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/config.json ...
	I1025 10:21:21.784070  638584 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:21:21.784113  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.807921  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.920186  638584 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:21:21.927662  638584 start.go:128] duration metric: took 11.525830646s to createHost
	I1025 10:21:21.927699  638584 start.go:83] releasing machines lock for "embed-certs-683681", held for 11.526002458s
	I1025 10:21:21.927785  638584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-683681
	I1025 10:21:21.954049  638584 ssh_runner.go:195] Run: cat /version.json
	I1025 10:21:21.954096  638584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:21:21.954115  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.954188  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.978409  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.979872  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:22.092988  638584 ssh_runner.go:195] Run: systemctl --version
	I1025 10:21:22.175966  638584 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:21:22.229838  638584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:21:22.236975  638584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:21:22.237063  638584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:21:22.280942  638584 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 10:21:22.280974  638584 start.go:495] detecting cgroup driver to use...
	I1025 10:21:22.281010  638584 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 10:21:22.281075  638584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:21:22.306839  638584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:21:22.324489  638584 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:21:22.324560  638584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:21:22.350902  638584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:21:22.380086  638584 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:21:22.506896  638584 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:21:22.639498  638584 docker.go:234] disabling docker service ...
	I1025 10:21:22.639578  638584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:21:22.669198  638584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:21:22.689583  638584 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:21:22.814437  638584 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:21:22.917355  638584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:21:22.933471  638584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:21:22.951220  638584 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:21:22.951289  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.964021  638584 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 10:21:22.964092  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.974888  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.985640  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.996280  638584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:21:23.008692  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:23.019742  638584 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:23.036857  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
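The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to systemd, and re-add conmon_cgroup = "pod" below it. The same edits expressed in Go over an illustrative starting config (a sketch; minikube shells out to sed exactly as logged):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Illustrative pre-edit contents; the real file ships in the kicbase image.
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"
conmon_cgroup = "system.slice"`

	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	// Drop any existing conmon_cgroup line, then re-add it after
	// cgroup_manager, mirroring the two sed commands in the log.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

	fmt.Println(conf)
}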
	I1025 10:21:23.048489  638584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:21:23.060801  638584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:21:23.072496  638584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:23.170641  638584 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:21:24.036513  638584 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:21:24.036615  638584 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:21:24.042080  638584 start.go:563] Will wait 60s for crictl version
	I1025 10:21:24.042156  638584 ssh_runner.go:195] Run: which crictl
	I1025 10:21:24.047422  638584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:21:24.082362  638584 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:21:24.082466  638584 ssh_runner.go:195] Run: crio --version
	I1025 10:21:24.126861  638584 ssh_runner.go:195] Run: crio --version
	I1025 10:21:24.175837  638584 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:21:24.178134  638584 cli_runner.go:164] Run: docker network inspect embed-certs-683681 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:21:24.201413  638584 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1025 10:21:24.207278  638584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
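The bash pipeline above drops any stale host.minikube.internal line and appends the fresh mapping. It stages through /tmp because only the final cp runs under sudo, and it copies rather than renames: /etc/hosts in a container is a bind mount that must be written in place. A Go sketch of the same rewrite, assuming the process already has write access:

package main

import (
	"fmt"
	"os"
	"strings"
)

// refreshHostEntry mirrors the grep/echo/cp pipeline: drop any line
// ending in "\t<host>" (grep -v), append the current mapping, write back.
func refreshHostEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	// Write in place; a rename would replace the bind-mounted inode.
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := refreshHostEntry("/etc/hosts", "192.168.94.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}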
	I1025 10:21:24.223512  638584 kubeadm.go:883] updating cluster {Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:21:24.223683  638584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:24.223762  638584 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:24.272966  638584 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:24.272993  638584 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:21:24.273051  638584 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:24.308934  638584 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:24.308965  638584 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:21:24.308975  638584 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1025 10:21:24.309097  638584 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-683681 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:21:24.309184  638584 ssh_runner.go:195] Run: crio config
	I1025 10:21:24.382243  638584 cni.go:84] Creating CNI manager for ""
	I1025 10:21:24.382273  638584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:21:24.382297  638584 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:21:24.382337  638584 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-683681 NodeName:embed-certs-683681 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:21:24.382524  638584 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-683681"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:21:24.382607  638584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:21:24.394268  638584 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:21:24.394387  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:21:24.406618  638584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1025 10:21:24.425969  638584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:21:24.449251  638584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
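The 2214-byte kubeadm.yaml.new written above is the four-document manifest printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A sketch that enumerates the documents as a sanity check, assuming gopkg.in/yaml.v3; minikube templates this file rather than re-parsing it this way:

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]any
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF once all documents are read
		}
		fmt.Printf("%s %s\n", doc["apiVersion"], doc["kind"])
	}
	// Expected:
	// kubeadm.k8s.io/v1beta4 InitConfiguration
	// kubeadm.k8s.io/v1beta4 ClusterConfiguration
	// kubelet.config.k8s.io/v1beta1 KubeletConfiguration
	// kubeproxy.config.k8s.io/v1alpha1 KubeProxyConfiguration
}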
	I1025 10:21:24.469582  638584 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:21:24.474973  638584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:21:24.490157  638584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:24.584608  638584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:21:24.614181  638584 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681 for IP: 192.168.94.2
	I1025 10:21:24.614210  638584 certs.go:195] generating shared ca certs ...
	I1025 10:21:24.614233  638584 certs.go:227] acquiring lock for ca certs: {Name:mke559c04eea9c265cb2a7beab0da125bee52db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.614424  638584 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key
	I1025 10:21:24.614484  638584 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key
	I1025 10:21:24.614496  638584 certs.go:257] generating profile certs ...
	I1025 10:21:24.614561  638584 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.key
	I1025 10:21:24.614588  638584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.crt with IP's: []
	I1025 10:21:24.860136  638584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.crt ...
	I1025 10:21:24.860185  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.crt: {Name:mk13866e786fa05bf2537b78a891e332bde8c0bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.860411  638584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.key ...
	I1025 10:21:24.860433  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.key: {Name:mk1337a45bd58216e46a47cf6f99440d10fa8b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.860559  638584 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81
	I1025 10:21:24.860582  638584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1025 10:21:24.949254  638584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81 ...
	I1025 10:21:24.949286  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81: {Name:mkc51a7d58b8866a38120d27081d78fd5d68e786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.949518  638584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81 ...
	I1025 10:21:24.949547  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81: {Name:mk94d386c4ce3ce7255b450634f934fa53890845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.949697  638584 certs.go:382] copying /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81 -> /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt
	I1025 10:21:24.949820  638584 certs.go:386] copying /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81 -> /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key
	I1025 10:21:24.949908  638584 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key
	I1025 10:21:24.949937  638584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt with IP's: []
	W1025 10:21:24.331982  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:26.831359  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:25.254917  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:27.754831  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:25.383221  638584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt ...
	I1025 10:21:25.383272  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt: {Name:mk46cb1967cb21d5d9aafce0c0335add4612cf00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:25.383535  638584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key ...
	I1025 10:21:25.383560  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key: {Name:mkda2e4f8c6847061b7c83d0748f50b193d241a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
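The apiserver profile cert generated above is a leaf signed by the minikubeCA key with IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2], i.e. the in-cluster service IP, loopback, and the node IP. A crypto/x509 sketch of that shape; ECDSA is used for brevity where minikube's certs use RSA, and the 26280h lifetime mirrors CertExpiration from the cluster config above:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Error handling elided for brevity throughout this sketch.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // the SAN list from the log line above
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}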
	I1025 10:21:25.383814  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem (1338 bytes)
	W1025 10:21:25.383870  638584 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455_empty.pem, impossibly tiny 0 bytes
	I1025 10:21:25.383887  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:21:25.383917  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:21:25.383941  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:21:25.383962  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem (1679 bytes)
	I1025 10:21:25.384004  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:25.384676  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:21:25.406810  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 10:21:25.429770  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:21:25.451189  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:21:25.475734  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1025 10:21:25.500538  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 10:21:25.522356  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:21:25.545290  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:21:25.567130  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:21:25.591445  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem --> /usr/share/ca-certificates/325455.pem (1338 bytes)
	I1025 10:21:25.616100  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /usr/share/ca-certificates/3254552.pem (1708 bytes)
	I1025 10:21:25.635723  638584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:21:25.650419  638584 ssh_runner.go:195] Run: openssl version
	I1025 10:21:25.657438  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:21:25.667296  638584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:25.671566  638584 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:32 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:25.671639  638584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:25.708223  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:21:25.718734  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/325455.pem && ln -fs /usr/share/ca-certificates/325455.pem /etc/ssl/certs/325455.pem"
	I1025 10:21:25.728930  638584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/325455.pem
	I1025 10:21:25.733604  638584 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:38 /usr/share/ca-certificates/325455.pem
	I1025 10:21:25.733672  638584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/325455.pem
	I1025 10:21:25.770496  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/325455.pem /etc/ssl/certs/51391683.0"
	I1025 10:21:25.780237  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3254552.pem && ln -fs /usr/share/ca-certificates/3254552.pem /etc/ssl/certs/3254552.pem"
	I1025 10:21:25.790312  638584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3254552.pem
	I1025 10:21:25.794835  638584 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:38 /usr/share/ca-certificates/3254552.pem
	I1025 10:21:25.794898  638584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3254552.pem
	I1025 10:21:25.832583  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3254552.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:21:25.842614  638584 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:21:25.846872  638584 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 10:21:25.846930  638584 kubeadm.go:400] StartCluster: {Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:21:25.847005  638584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:21:25.847068  638584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:21:25.875826  638584 cri.go:89] found id: ""
	I1025 10:21:25.875903  638584 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:21:25.885163  638584 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:21:25.894136  638584 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 10:21:25.894192  638584 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:21:25.903706  638584 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 10:21:25.903732  638584 kubeadm.go:157] found existing configuration files:
	
	I1025 10:21:25.903784  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 10:21:25.913301  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 10:21:25.913384  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:21:25.923343  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 10:21:25.932490  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 10:21:25.932550  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:21:25.941477  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 10:21:25.950962  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 10:21:25.951028  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:21:25.959533  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 10:21:25.968524  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 10:21:25.968595  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 10:21:25.977380  638584 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 10:21:26.045566  638584 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 10:21:26.120440  638584 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> CRI-O <==
	Oct 25 10:20:50 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:50.52557564Z" level=info msg="Starting container: 3095de72ef005f8da24fe13f5c258a5435b6eb90510b5c538ac55193462ab85a" id=05dc4ad7-7540-44ab-b9da-773fa1bcca4f name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:20:50 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:50.528381763Z" level=info msg="Started container" PID=1667 containerID=3095de72ef005f8da24fe13f5c258a5435b6eb90510b5c538ac55193462ab85a description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r/dashboard-metrics-scraper id=05dc4ad7-7540-44ab-b9da-773fa1bcca4f name=/runtime.v1.RuntimeService/StartContainer sandboxID=c02df6df091e149755ea16998551388180b1ae68589d0a50e2ed2f45de2124e7
	Oct 25 10:20:51 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:51.491476818Z" level=info msg="Removing container: 636302bfd0254fc20079b8d9fcba81822f3c418244e5d7178b98cd710a0bc827" id=f8216eae-b8d3-4f35-96df-8182f66d2f23 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:20:51 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:51.504038139Z" level=info msg="Removed container 636302bfd0254fc20079b8d9fcba81822f3c418244e5d7178b98cd710a0bc827: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r/dashboard-metrics-scraper" id=f8216eae-b8d3-4f35-96df-8182f66d2f23 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:20:53 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:53.488466798Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=726b2567-8cb8-4c14-856a-246195d3ce4a name=/runtime.v1.ImageService/PullImage
	Oct 25 10:20:53 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:53.490553401Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=5c87eac5-3d86-4dca-9acc-7b617814b016 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:20:53 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:53.493144135Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mshs4/kubernetes-dashboard" id=1f36e9c2-564a-4575-be0b-b6377011d919 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:20:53 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:53.494261163Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:53 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:53.502573691Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:53 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:53.502948542Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3fcc4b58b1a8e5da26fb2264a5f7e6c09b6ed60883f1b43b667fa39fec9755e9/merged/etc/group: no such file or directory"
	Oct 25 10:20:53 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:53.503532006Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:53 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:53.533709935Z" level=info msg="Created container 023f9eec31a026b72d57a05e021ddae34e171cf9477f9c45ccc83ccc83724ad3: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mshs4/kubernetes-dashboard" id=1f36e9c2-564a-4575-be0b-b6377011d919 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:20:53 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:53.53458072Z" level=info msg="Starting container: 023f9eec31a026b72d57a05e021ddae34e171cf9477f9c45ccc83ccc83724ad3" id=714ee3ee-9044-42e5-9c65-bc47d3a73d26 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:20:53 old-k8s-version-714798 crio[559]: time="2025-10-25T10:20:53.537180123Z" level=info msg="Started container" PID=1716 containerID=023f9eec31a026b72d57a05e021ddae34e171cf9477f9c45ccc83ccc83724ad3 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mshs4/kubernetes-dashboard id=714ee3ee-9044-42e5-9c65-bc47d3a73d26 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9caf4a77f26bc21c5423beeb1b922fc9163c0a010fd8ac7f1aa0c0dd55e215f6
	Oct 25 10:21:10 old-k8s-version-714798 crio[559]: time="2025-10-25T10:21:10.380626116Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=204688f5-7ffe-46d6-b56e-fb5c51f84669 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:21:10 old-k8s-version-714798 crio[559]: time="2025-10-25T10:21:10.383625717Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b201715e-86d9-40a4-9b59-7c61ffdf76a3 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:21:10 old-k8s-version-714798 crio[559]: time="2025-10-25T10:21:10.385495169Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r/dashboard-metrics-scraper" id=f523e15f-e24c-4d8d-84de-96f831063eec name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:21:10 old-k8s-version-714798 crio[559]: time="2025-10-25T10:21:10.385661121Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:10 old-k8s-version-714798 crio[559]: time="2025-10-25T10:21:10.395894289Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:10 old-k8s-version-714798 crio[559]: time="2025-10-25T10:21:10.39673567Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:10 old-k8s-version-714798 crio[559]: time="2025-10-25T10:21:10.435573403Z" level=info msg="Created container 2867ca1d41946eeecbfc494d499686f5ecf5f15b7090ccc842b585183da21368: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r/dashboard-metrics-scraper" id=f523e15f-e24c-4d8d-84de-96f831063eec name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:21:10 old-k8s-version-714798 crio[559]: time="2025-10-25T10:21:10.436364114Z" level=info msg="Starting container: 2867ca1d41946eeecbfc494d499686f5ecf5f15b7090ccc842b585183da21368" id=0be372d0-5f5d-4836-864a-3ad173130492 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:21:10 old-k8s-version-714798 crio[559]: time="2025-10-25T10:21:10.438388091Z" level=info msg="Started container" PID=1735 containerID=2867ca1d41946eeecbfc494d499686f5ecf5f15b7090ccc842b585183da21368 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r/dashboard-metrics-scraper id=0be372d0-5f5d-4836-864a-3ad173130492 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c02df6df091e149755ea16998551388180b1ae68589d0a50e2ed2f45de2124e7
	Oct 25 10:21:10 old-k8s-version-714798 crio[559]: time="2025-10-25T10:21:10.559779502Z" level=info msg="Removing container: 3095de72ef005f8da24fe13f5c258a5435b6eb90510b5c538ac55193462ab85a" id=89c7faf2-032a-4d03-962c-a6c0cbb55db8 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:21:10 old-k8s-version-714798 crio[559]: time="2025-10-25T10:21:10.57197179Z" level=info msg="Removed container 3095de72ef005f8da24fe13f5c258a5435b6eb90510b5c538ac55193462ab85a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r/dashboard-metrics-scraper" id=89c7faf2-032a-4d03-962c-a6c0cbb55db8 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	2867ca1d41946       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago       Exited              dashboard-metrics-scraper   2                   c02df6df091e1       dashboard-metrics-scraper-5f989dc9cf-nbn6r       kubernetes-dashboard
	023f9eec31a02       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago       Running             kubernetes-dashboard        0                   9caf4a77f26bc       kubernetes-dashboard-8694d4445c-mshs4            kubernetes-dashboard
	83298f2967781       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           59 seconds ago       Running             storage-provisioner         1                   4607ea6244f35       storage-provisioner                              kube-system
	553718397c387       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           About a minute ago   Running             coredns                     0                   37dd48d1ba5b4       coredns-5dd5756b68-k5644                         kube-system
	93c3d9ff32729       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           About a minute ago   Running             busybox                     1                   fa285afeb70aa       busybox                                          default
	f986363d36450       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           About a minute ago   Exited              storage-provisioner         0                   4607ea6244f35       storage-provisioner                              kube-system
	f4a2f7f040204       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           About a minute ago   Running             kindnet-cni                 0                   24837f8a957eb       kindnet-g9r7c                                    kube-system
	02ebd7cadca0e       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           About a minute ago   Running             kube-proxy                  0                   bfb797eeb5c7f       kube-proxy-kqg7q                                 kube-system
	5538d92e1ad00       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           About a minute ago   Running             kube-apiserver              0                   ed87d2b77bc52       kube-apiserver-old-k8s-version-714798            kube-system
	bbd6a05e15124       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           About a minute ago   Running             kube-scheduler              0                   9659e8b2febb4       kube-scheduler-old-k8s-version-714798            kube-system
	ce12ceda5c77b       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           About a minute ago   Running             kube-controller-manager     0                   5196626a8cf61       kube-controller-manager-old-k8s-version-714798   kube-system
	b25eb7cda6de2       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           About a minute ago   Running             etcd                        0                   2e7c3d6d2c900       etcd-old-k8s-version-714798                      kube-system
	
	
	==> coredns [553718397c387da8f5f2fcd092c2a59e58c71cc63b088ea724a3169ee7c5b5bc] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55810 - 64319 "HINFO IN 848335762832656212.1076516786252787776. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.070833756s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-714798
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-714798
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=old-k8s-version-714798
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_19_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:19:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-714798
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:21:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:21:03 +0000   Sat, 25 Oct 2025 10:19:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:21:03 +0000   Sat, 25 Oct 2025 10:19:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:21:03 +0000   Sat, 25 Oct 2025 10:19:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:21:03 +0000   Sat, 25 Oct 2025 10:19:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-714798
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                ae2946a1-bd36-4e8d-a493-cdd7e65b514c
	  Boot ID:                    41de8bfa-0cc1-441f-80d4-c56fb8371229
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 coredns-5dd5756b68-k5644                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m
	  kube-system                 etcd-old-k8s-version-714798                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m13s
	  kube-system                 kindnet-g9r7c                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m
	  kube-system                 kube-apiserver-old-k8s-version-714798             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-controller-manager-old-k8s-version-714798    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-proxy-kqg7q                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-scheduler-old-k8s-version-714798             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-nbn6r        0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-mshs4             0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 118s               kube-proxy       
	  Normal  Starting                 60s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m13s              kubelet          Node old-k8s-version-714798 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m13s              kubelet          Node old-k8s-version-714798 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m13s              kubelet          Node old-k8s-version-714798 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m13s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m1s               node-controller  Node old-k8s-version-714798 event: Registered Node old-k8s-version-714798 in Controller
	  Normal  NodeReady                106s               kubelet          Node old-k8s-version-714798 status is now: NodeReady
	  Normal  Starting                 64s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  64s (x8 over 64s)  kubelet          Node old-k8s-version-714798 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s (x8 over 64s)  kubelet          Node old-k8s-version-714798 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x8 over 64s)  kubelet          Node old-k8s-version-714798 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                node-controller  Node old-k8s-version-714798 event: Registered Node old-k8s-version-714798 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[Oct25 10:18] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 3d 4d bf 49 5d 08 06
	[  +0.000365] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fe 72 b8 ab d2 81 08 06
	[ +29.291338] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 96 23 11 37 e3 00 08 06
	[  +0.000335] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[ +21.527050] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 89 98 95 1f c3 08 06
	[  +0.000689] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[Oct25 10:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[  +9.472150] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
	[  +6.585715] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ce 90 e9 36 a0 95 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[ +15.111475] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 5e 04 d2 54 0d 08 06
	[  +0.000467] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
	
	
	==> etcd [b25eb7cda6de2aff244793687094ba7b3ca70cb7a03ef1adb707e0d582e0580e] <==
	{"level":"info","ts":"2025-10-25T10:20:30.984904Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T10:20:30.984916Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T10:20:30.984964Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T10:20:30.985799Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-25T10:20:30.985958Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-25T10:20:30.985995Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-25T10:20:30.986085Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-25T10:20:30.986119Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-25T10:20:31.966645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-25T10:20:31.966713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-25T10:20:31.966747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-25T10:20:31.966766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-25T10:20:31.966771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-25T10:20:31.96678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-25T10:20:31.966788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-25T10:20:31.967693Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-714798 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-25T10:20:31.967707Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T10:20:31.967727Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T10:20:31.968003Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-25T10:20:31.968056Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-25T10:20:31.969094Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-25T10:20:31.969441Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-25T10:21:14.060499Z","caller":"traceutil/trace.go:171","msg":"trace[1284681663] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"114.245708ms","start":"2025-10-25T10:21:13.946229Z","end":"2025-10-25T10:21:14.060475Z","steps":["trace[1284681663] 'process raft request'  (duration: 75.802471ms)","trace[1284681663] 'compare'  (duration: 38.008006ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T10:21:14.328719Z","caller":"traceutil/trace.go:171","msg":"trace[256172037] transaction","detail":"{read_only:false; response_revision:626; number_of_response:1; }","duration":"113.471681ms","start":"2025-10-25T10:21:14.215198Z","end":"2025-10-25T10:21:14.328669Z","steps":["trace[256172037] 'process raft request'  (duration: 105.784211ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:21:14.329446Z","caller":"traceutil/trace.go:171","msg":"trace[991438834] transaction","detail":"{read_only:false; response_revision:627; number_of_response:1; }","duration":"111.040832ms","start":"2025-10-25T10:21:14.21839Z","end":"2025-10-25T10:21:14.329431Z","steps":["trace[991438834] 'process raft request'  (duration: 110.077934ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:21:34 up  2:04,  0 user,  load average: 7.39, 5.47, 6.08
	Linux old-k8s-version-714798 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f4a2f7f040204ba504676eed9f3884012aeaf80acbd4821516096fc8bff9e833] <==
	I1025 10:20:34.092063       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:20:34.092795       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:20:34.096587       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:20:34.096694       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:20:34.096750       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:20:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:20:34.391904       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:20:34.392043       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:20:34.392059       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:20:34.392234       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 10:20:34.774694       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:20:34.774732       1 metrics.go:72] Registering metrics
	I1025 10:20:34.774815       1 controller.go:711] "Syncing nftables rules"
	I1025 10:20:44.392468       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:20:44.392534       1 main.go:301] handling current node
	I1025 10:20:54.392443       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:20:54.392486       1 main.go:301] handling current node
	I1025 10:21:04.392189       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:21:04.392245       1 main.go:301] handling current node
	I1025 10:21:14.391866       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:21:14.391914       1 main.go:301] handling current node
	I1025 10:21:24.398432       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:21:24.398482       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5538d92e1ad00d0b895ea0869e732ceaf8db5758c6940c69bb5d41a8e0661704] <==
	I1025 10:20:33.049080       1 naming_controller.go:291] Starting NamingConditionController
	I1025 10:20:33.127448       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1025 10:20:33.148830       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1025 10:20:33.150952       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1025 10:20:33.151050       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1025 10:20:33.151096       1 shared_informer.go:318] Caches are synced for configmaps
	I1025 10:20:33.151145       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:20:33.151606       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1025 10:20:33.151618       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1025 10:20:33.151673       1 aggregator.go:166] initial CRD sync complete...
	I1025 10:20:33.151683       1 autoregister_controller.go:141] Starting autoregister controller
	I1025 10:20:33.151690       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:20:33.151698       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:20:33.213390       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:20:34.059657       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:20:34.538795       1 controller.go:624] quota admission added evaluator for: namespaces
	I1025 10:20:34.585125       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1025 10:20:34.611403       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:20:34.621739       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:20:34.630262       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1025 10:20:34.677754       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.182.56"}
	I1025 10:20:34.698095       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.186.43"}
	I1025 10:20:46.325178       1 controller.go:624] quota admission added evaluator for: endpoints
	I1025 10:20:46.523676       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1025 10:20:46.573237       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [ce12ceda5c77bef4710f4a8f8a5a88ca899e512d3d2151b06751ca05f3184af3] <==
	I1025 10:20:46.529761       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1025 10:20:46.628284       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="308.369423ms"
	I1025 10:20:46.628463       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="102.318µs"
	I1025 10:20:46.631176       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-mshs4"
	I1025 10:20:46.631263       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-nbn6r"
	I1025 10:20:46.637237       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 10:20:46.637270       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1025 10:20:46.639843       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="110.815791ms"
	I1025 10:20:46.639872       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="112.015041ms"
	I1025 10:20:46.648267       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 10:20:46.649217       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="9.300581ms"
	I1025 10:20:46.651426       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="2.145347ms"
	I1025 10:20:46.651806       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.893207ms"
	I1025 10:20:46.651910       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="68.05µs"
	I1025 10:20:46.655131       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="90.956µs"
	I1025 10:20:46.664570       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.098µs"
	I1025 10:20:50.498676       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.828µs"
	I1025 10:20:51.505940       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.719µs"
	I1025 10:20:52.554878       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="134.985µs"
	I1025 10:20:54.566074       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="43.603602ms"
	I1025 10:20:54.566199       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="75.968µs"
	I1025 10:21:10.573448       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="122.271µs"
	I1025 10:21:14.212352       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="105.049718ms"
	I1025 10:21:14.212486       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.409µs"
	I1025 10:21:16.954981       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="84.641µs"
	
	
	==> kube-proxy [02ebd7cadca0e2f2e1a8fdb2d2a4025e434b7679c4e9c3329b85521f4edff815] <==
	I1025 10:20:33.925955       1 server_others.go:69] "Using iptables proxy"
	I1025 10:20:33.954592       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1025 10:20:34.012444       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:20:34.018918       1 server_others.go:152] "Using iptables Proxier"
	I1025 10:20:34.018969       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1025 10:20:34.018980       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1025 10:20:34.019022       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 10:20:34.019681       1 server.go:846] "Version info" version="v1.28.0"
	I1025 10:20:34.019991       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:20:34.023492       1 config.go:188] "Starting service config controller"
	I1025 10:20:34.023569       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 10:20:34.023680       1 config.go:97] "Starting endpoint slice config controller"
	I1025 10:20:34.023708       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 10:20:34.024099       1 config.go:315] "Starting node config controller"
	I1025 10:20:34.024557       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 10:20:34.125374       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1025 10:20:34.125592       1 shared_informer.go:318] Caches are synced for node config
	I1025 10:20:34.125617       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [bbd6a05e151245b4f918254624d45abfaa66832cc221e776d8265d0e8fa29750] <==
	I1025 10:20:31.302881       1 serving.go:348] Generated self-signed cert in-memory
	W1025 10:20:33.096356       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 10:20:33.096397       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 10:20:33.096521       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 10:20:33.096537       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 10:20:33.119067       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1025 10:20:33.119120       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:20:33.126133       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:20:33.126302       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 10:20:33.135195       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1025 10:20:33.135563       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1025 10:20:33.227639       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 25 10:20:46 old-k8s-version-714798 kubelet[710]: I1025 10:20:46.781532     710 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stfj7\" (UniqueName: \"kubernetes.io/projected/633c266d-f837-432b-843f-b86244518663-kube-api-access-stfj7\") pod \"kubernetes-dashboard-8694d4445c-mshs4\" (UID: \"633c266d-f837-432b-843f-b86244518663\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mshs4"
	Oct 25 10:20:46 old-k8s-version-714798 kubelet[710]: I1025 10:20:46.781617     710 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/633c266d-f837-432b-843f-b86244518663-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-mshs4\" (UID: \"633c266d-f837-432b-843f-b86244518663\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mshs4"
	Oct 25 10:20:46 old-k8s-version-714798 kubelet[710]: I1025 10:20:46.781657     710 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfg5w\" (UniqueName: \"kubernetes.io/projected/f45f31cd-c886-48b9-8f84-70ac66dac634-kube-api-access-rfg5w\") pod \"dashboard-metrics-scraper-5f989dc9cf-nbn6r\" (UID: \"f45f31cd-c886-48b9-8f84-70ac66dac634\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r"
	Oct 25 10:20:46 old-k8s-version-714798 kubelet[710]: I1025 10:20:46.781784     710 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f45f31cd-c886-48b9-8f84-70ac66dac634-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-nbn6r\" (UID: \"f45f31cd-c886-48b9-8f84-70ac66dac634\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r"
	Oct 25 10:20:50 old-k8s-version-714798 kubelet[710]: I1025 10:20:50.484823     710 scope.go:117] "RemoveContainer" containerID="636302bfd0254fc20079b8d9fcba81822f3c418244e5d7178b98cd710a0bc827"
	Oct 25 10:20:51 old-k8s-version-714798 kubelet[710]: I1025 10:20:51.489568     710 scope.go:117] "RemoveContainer" containerID="636302bfd0254fc20079b8d9fcba81822f3c418244e5d7178b98cd710a0bc827"
	Oct 25 10:20:51 old-k8s-version-714798 kubelet[710]: I1025 10:20:51.489933     710 scope.go:117] "RemoveContainer" containerID="3095de72ef005f8da24fe13f5c258a5435b6eb90510b5c538ac55193462ab85a"
	Oct 25 10:20:51 old-k8s-version-714798 kubelet[710]: E1025 10:20:51.490349     710 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nbn6r_kubernetes-dashboard(f45f31cd-c886-48b9-8f84-70ac66dac634)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r" podUID="f45f31cd-c886-48b9-8f84-70ac66dac634"
	Oct 25 10:20:52 old-k8s-version-714798 kubelet[710]: I1025 10:20:52.493810     710 scope.go:117] "RemoveContainer" containerID="3095de72ef005f8da24fe13f5c258a5435b6eb90510b5c538ac55193462ab85a"
	Oct 25 10:20:52 old-k8s-version-714798 kubelet[710]: E1025 10:20:52.494212     710 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nbn6r_kubernetes-dashboard(f45f31cd-c886-48b9-8f84-70ac66dac634)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r" podUID="f45f31cd-c886-48b9-8f84-70ac66dac634"
	Oct 25 10:20:54 old-k8s-version-714798 kubelet[710]: I1025 10:20:54.522436     710 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mshs4" podStartSLOduration=2.00098375 podCreationTimestamp="2025-10-25 10:20:46 +0000 UTC" firstStartedPulling="2025-10-25 10:20:46.968470366 +0000 UTC m=+16.696572504" lastFinishedPulling="2025-10-25 10:20:53.48979132 +0000 UTC m=+23.217893460" observedRunningTime="2025-10-25 10:20:54.521700913 +0000 UTC m=+24.249803055" watchObservedRunningTime="2025-10-25 10:20:54.522304706 +0000 UTC m=+24.250406863"
	Oct 25 10:20:56 old-k8s-version-714798 kubelet[710]: I1025 10:20:56.943488     710 scope.go:117] "RemoveContainer" containerID="3095de72ef005f8da24fe13f5c258a5435b6eb90510b5c538ac55193462ab85a"
	Oct 25 10:20:56 old-k8s-version-714798 kubelet[710]: E1025 10:20:56.943883     710 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nbn6r_kubernetes-dashboard(f45f31cd-c886-48b9-8f84-70ac66dac634)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r" podUID="f45f31cd-c886-48b9-8f84-70ac66dac634"
	Oct 25 10:21:10 old-k8s-version-714798 kubelet[710]: I1025 10:21:10.377714     710 scope.go:117] "RemoveContainer" containerID="3095de72ef005f8da24fe13f5c258a5435b6eb90510b5c538ac55193462ab85a"
	Oct 25 10:21:10 old-k8s-version-714798 kubelet[710]: I1025 10:21:10.558258     710 scope.go:117] "RemoveContainer" containerID="3095de72ef005f8da24fe13f5c258a5435b6eb90510b5c538ac55193462ab85a"
	Oct 25 10:21:10 old-k8s-version-714798 kubelet[710]: I1025 10:21:10.558554     710 scope.go:117] "RemoveContainer" containerID="2867ca1d41946eeecbfc494d499686f5ecf5f15b7090ccc842b585183da21368"
	Oct 25 10:21:10 old-k8s-version-714798 kubelet[710]: E1025 10:21:10.558957     710 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nbn6r_kubernetes-dashboard(f45f31cd-c886-48b9-8f84-70ac66dac634)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r" podUID="f45f31cd-c886-48b9-8f84-70ac66dac634"
	Oct 25 10:21:16 old-k8s-version-714798 kubelet[710]: I1025 10:21:16.942492     710 scope.go:117] "RemoveContainer" containerID="2867ca1d41946eeecbfc494d499686f5ecf5f15b7090ccc842b585183da21368"
	Oct 25 10:21:16 old-k8s-version-714798 kubelet[710]: E1025 10:21:16.943025     710 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nbn6r_kubernetes-dashboard(f45f31cd-c886-48b9-8f84-70ac66dac634)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r" podUID="f45f31cd-c886-48b9-8f84-70ac66dac634"
	Oct 25 10:21:28 old-k8s-version-714798 kubelet[710]: I1025 10:21:28.377693     710 scope.go:117] "RemoveContainer" containerID="2867ca1d41946eeecbfc494d499686f5ecf5f15b7090ccc842b585183da21368"
	Oct 25 10:21:28 old-k8s-version-714798 kubelet[710]: E1025 10:21:28.378140     710 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nbn6r_kubernetes-dashboard(f45f31cd-c886-48b9-8f84-70ac66dac634)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nbn6r" podUID="f45f31cd-c886-48b9-8f84-70ac66dac634"
	Oct 25 10:21:29 old-k8s-version-714798 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:21:29 old-k8s-version-714798 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:21:29 old-k8s-version-714798 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 10:21:29 old-k8s-version-714798 systemd[1]: kubelet.service: Consumed 1.800s CPU time.
	
	
	==> kubernetes-dashboard [023f9eec31a026b72d57a05e021ddae34e171cf9477f9c45ccc83ccc83724ad3] <==
	2025/10/25 10:20:53 Starting overwatch
	2025/10/25 10:20:53 Using namespace: kubernetes-dashboard
	2025/10/25 10:20:53 Using in-cluster config to connect to apiserver
	2025/10/25 10:20:53 Using secret token for csrf signing
	2025/10/25 10:20:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 10:20:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 10:20:53 Successful initial request to the apiserver, version: v1.28.0
	2025/10/25 10:20:53 Generating JWE encryption key
	2025/10/25 10:20:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 10:20:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 10:20:53 Initializing JWE encryption key from synchronized object
	2025/10/25 10:20:53 Creating in-cluster Sidecar client
	2025/10/25 10:20:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:20:53 Serving insecurely on HTTP port: 9090
	2025/10/25 10:21:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [83298f29677812bdb89aebe27bacd5765cc414cfbcb8ae3820f968d7dfb2a0a8] <==
	I1025 10:20:34.506706       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:20:34.524999       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:20:34.525072       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 10:20:51.933825       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:20:51.933896       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"989e4d24-c526-4f80-8238-4bbd30d72adb", APIVersion:"v1", ResourceVersion:"581", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-714798_d6752aae-7aea-4dfb-845c-f0d95077fb09 became leader
	I1025 10:20:51.933963       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-714798_d6752aae-7aea-4dfb-845c-f0d95077fb09!
	I1025 10:20:52.034479       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-714798_d6752aae-7aea-4dfb-845c-f0d95077fb09!
	
	
	==> storage-provisioner [f986363d36450aecccdaa98aebe4eb5dbc429656a6bee1770bbfde083685da0c] <==
	I1025 10:20:33.923498       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 10:20:33.925778       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
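The tail of that log captures the pause-time state of this profile: the dashboard pod comes up and serves on port 9090, its metrics client keeps failing its health check because dashboard-metrics-scraper is crash-looping, and the first storage-provisioner instance dies with connection refused on the apiserver service IP before its replacement acquires the leader lease. A quick hand-check of that apiserver path, as a sketch only (it assumes the profile were still up, kube-proxy programmed, and curl available in the node image; the service IP is taken from the log above):

	# Probe the in-cluster apiserver service address from inside the node
	minikube ssh -p old-k8s-version-714798 "curl -sk https://10.96.0.1:443/version"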
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-714798 -n old-k8s-version-714798
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-714798 -n old-k8s-version-714798: exit status 2 (397.717446ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-714798 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
E1025 10:21:35.058368  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/auto-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:21:35.064806  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/auto-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:21:35.076245  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/auto-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.49s)
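The kubelet journal in the post-mortem above shows dashboard-metrics-scraper-5f989dc9cf-nbn6r cycling through CrashLoopBackOff (back-off 10s, then 20s) right up until the pause stops kubelet. A minimal triage sketch for that pattern, assuming the cluster still existed; the context, namespace, and pod name are all taken from the log above:

	# Restart counts and current state of the dashboard pods
	kubectl --context old-k8s-version-714798 -n kubernetes-dashboard get pods
	# Logs of the previous (crashed) container instance
	kubectl --context old-k8s-version-714798 -n kubernetes-dashboard logs dashboard-metrics-scraper-5f989dc9cf-nbn6r --previous
	# Events (probe failures, image pulls, OOM kills) recorded for the pod
	kubectl --context old-k8s-version-714798 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-5f989dc9cf-nbn6r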

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (5.96s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-899665 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-899665 --alsologtostderr -v=1: exit status 80 (1.694176266s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-899665 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 10:21:52.595266  645441 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:21:52.595540  645441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:21:52.595551  645441 out.go:374] Setting ErrFile to fd 2...
	I1025 10:21:52.595556  645441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:21:52.595783  645441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 10:21:52.596012  645441 out.go:368] Setting JSON to false
	I1025 10:21:52.596066  645441 mustload.go:65] Loading cluster: no-preload-899665
	I1025 10:21:52.596442  645441 config.go:182] Loaded profile config "no-preload-899665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:52.596934  645441 cli_runner.go:164] Run: docker container inspect no-preload-899665 --format={{.State.Status}}
	I1025 10:21:52.617305  645441 host.go:66] Checking if "no-preload-899665" exists ...
	I1025 10:21:52.617669  645441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:21:52.685882  645441 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-25 10:21:52.672075392 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:21:52.686685  645441 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-899665 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 10:21:52.689476  645441 out.go:179] * Pausing node no-preload-899665 ... 
	I1025 10:21:52.690692  645441 host.go:66] Checking if "no-preload-899665" exists ...
	I1025 10:21:52.690981  645441 ssh_runner.go:195] Run: systemctl --version
	I1025 10:21:52.691025  645441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-899665
	I1025 10:21:52.710269  645441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/no-preload-899665/id_rsa Username:docker}
	I1025 10:21:52.816549  645441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:21:52.849291  645441 pause.go:52] kubelet running: true
	I1025 10:21:52.849410  645441 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:21:53.047582  645441 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:21:53.047770  645441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:21:53.123355  645441 cri.go:89] found id: "e435fa14f2cceba2eb3f8f15eb6412ef2454dbc3812f08964c402cf1e6522851"
	I1025 10:21:53.123381  645441 cri.go:89] found id: "22cccd3b8325d38064ff3cf5dec75ac34e8ea0682f221af167776ca55146f3d7"
	I1025 10:21:53.123386  645441 cri.go:89] found id: "7aa07387b3dadb428f650a505ba419b3a80a74e2038ef9adb6684c94298a0ca5"
	I1025 10:21:53.123395  645441 cri.go:89] found id: "6c060dfbf2e501de983eb8ec105f8a398270827cd89f6a0aa1efc2893da367a6"
	I1025 10:21:53.123399  645441 cri.go:89] found id: "059ea673d4650d6e7e9628b8a7cf58c09fb38646edaba28e0ed69edba66a5ad8"
	I1025 10:21:53.123403  645441 cri.go:89] found id: "5120b28e61a325e39f449795f46e9d4332fe4fe8d721f0cb753fff3aeddf5964"
	I1025 10:21:53.123406  645441 cri.go:89] found id: "352d3fd34e0c2d541fcf1e1a74e6466f8d1c2eeb5794c69f26b05784aa993d7f"
	I1025 10:21:53.123410  645441 cri.go:89] found id: "b199511be2bb272a9b6fcefc2c7f2d0cc2c364bcb33d5762b0f79b58442e445a"
	I1025 10:21:53.123414  645441 cri.go:89] found id: "f94925c7a05442fb6214b27d55f74ec54efa54bb994038837f4ee6aec190c793"
	I1025 10:21:53.123424  645441 cri.go:89] found id: "8cfca56338f81739721a8fc6791605752dfe0bc05037803fa23ac142fec9a9e6"
	I1025 10:21:53.123428  645441 cri.go:89] found id: "6dcccb2cdcdf4276c8b975282d608c7438084301444b6d594bdeb6eb819546b9"
	I1025 10:21:53.123450  645441 cri.go:89] found id: ""
	I1025 10:21:53.123506  645441 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:21:53.137150  645441 retry.go:31] will retry after 309.083866ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:21:53Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:21:53.446694  645441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:21:53.462115  645441 pause.go:52] kubelet running: false
	I1025 10:21:53.462172  645441 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:21:53.606837  645441 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:21:53.606946  645441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:21:53.681125  645441 cri.go:89] found id: "e435fa14f2cceba2eb3f8f15eb6412ef2454dbc3812f08964c402cf1e6522851"
	I1025 10:21:53.681153  645441 cri.go:89] found id: "22cccd3b8325d38064ff3cf5dec75ac34e8ea0682f221af167776ca55146f3d7"
	I1025 10:21:53.681158  645441 cri.go:89] found id: "7aa07387b3dadb428f650a505ba419b3a80a74e2038ef9adb6684c94298a0ca5"
	I1025 10:21:53.681161  645441 cri.go:89] found id: "6c060dfbf2e501de983eb8ec105f8a398270827cd89f6a0aa1efc2893da367a6"
	I1025 10:21:53.681164  645441 cri.go:89] found id: "059ea673d4650d6e7e9628b8a7cf58c09fb38646edaba28e0ed69edba66a5ad8"
	I1025 10:21:53.681168  645441 cri.go:89] found id: "5120b28e61a325e39f449795f46e9d4332fe4fe8d721f0cb753fff3aeddf5964"
	I1025 10:21:53.681170  645441 cri.go:89] found id: "352d3fd34e0c2d541fcf1e1a74e6466f8d1c2eeb5794c69f26b05784aa993d7f"
	I1025 10:21:53.681173  645441 cri.go:89] found id: "b199511be2bb272a9b6fcefc2c7f2d0cc2c364bcb33d5762b0f79b58442e445a"
	I1025 10:21:53.681175  645441 cri.go:89] found id: "f94925c7a05442fb6214b27d55f74ec54efa54bb994038837f4ee6aec190c793"
	I1025 10:21:53.681187  645441 cri.go:89] found id: "8cfca56338f81739721a8fc6791605752dfe0bc05037803fa23ac142fec9a9e6"
	I1025 10:21:53.681190  645441 cri.go:89] found id: "6dcccb2cdcdf4276c8b975282d608c7438084301444b6d594bdeb6eb819546b9"
	I1025 10:21:53.681193  645441 cri.go:89] found id: ""
	I1025 10:21:53.681235  645441 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:21:53.695546  645441 retry.go:31] will retry after 248.90294ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:21:53Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:21:53.945075  645441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:21:53.960031  645441 pause.go:52] kubelet running: false
	I1025 10:21:53.960099  645441 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:21:54.110045  645441 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:21:54.110140  645441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:21:54.188821  645441 cri.go:89] found id: "e435fa14f2cceba2eb3f8f15eb6412ef2454dbc3812f08964c402cf1e6522851"
	I1025 10:21:54.188862  645441 cri.go:89] found id: "22cccd3b8325d38064ff3cf5dec75ac34e8ea0682f221af167776ca55146f3d7"
	I1025 10:21:54.188870  645441 cri.go:89] found id: "7aa07387b3dadb428f650a505ba419b3a80a74e2038ef9adb6684c94298a0ca5"
	I1025 10:21:54.188874  645441 cri.go:89] found id: "6c060dfbf2e501de983eb8ec105f8a398270827cd89f6a0aa1efc2893da367a6"
	I1025 10:21:54.188877  645441 cri.go:89] found id: "059ea673d4650d6e7e9628b8a7cf58c09fb38646edaba28e0ed69edba66a5ad8"
	I1025 10:21:54.188882  645441 cri.go:89] found id: "5120b28e61a325e39f449795f46e9d4332fe4fe8d721f0cb753fff3aeddf5964"
	I1025 10:21:54.188885  645441 cri.go:89] found id: "352d3fd34e0c2d541fcf1e1a74e6466f8d1c2eeb5794c69f26b05784aa993d7f"
	I1025 10:21:54.188887  645441 cri.go:89] found id: "b199511be2bb272a9b6fcefc2c7f2d0cc2c364bcb33d5762b0f79b58442e445a"
	I1025 10:21:54.188890  645441 cri.go:89] found id: "f94925c7a05442fb6214b27d55f74ec54efa54bb994038837f4ee6aec190c793"
	I1025 10:21:54.188900  645441 cri.go:89] found id: "8cfca56338f81739721a8fc6791605752dfe0bc05037803fa23ac142fec9a9e6"
	I1025 10:21:54.188903  645441 cri.go:89] found id: "6dcccb2cdcdf4276c8b975282d608c7438084301444b6d594bdeb6eb819546b9"
	I1025 10:21:54.188906  645441 cri.go:89] found id: ""
	I1025 10:21:54.188951  645441 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:21:54.204397  645441 out.go:203] 
	W1025 10:21:54.205948  645441 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:21:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:21:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 10:21:54.205971  645441 out.go:285] * 
	* 
	W1025 10:21:54.210309  645441 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 10:21:54.211833  645441 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-899665 --alsologtostderr -v=1 failed: exit status 80
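The failing step is the repeated `sudo runc list -f json` in the stderr above: runc's default state root, /run/runc, is missing inside the node, so minikube cannot enumerate running containers to freeze them even though crictl still reports eleven of them. A hand-reproduction sketch, assuming the node container is still running; the commands mirror what pause.go executes over SSH:

	# Open a shell inside the node for the failing profile
	minikube ssh -p no-preload-899665
	# The exact call minikube's pause path retries; fails with "open /run/runc: no such file or directory"
	sudo runc list -f json
	# Confirm the state directory is absent while CRI-O still reports containers
	sudo ls /run/runc
	sudo crictl ps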
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-899665
helpers_test.go:243: (dbg) docker inspect no-preload-899665:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "695e74f3d798876c79ec9dbb6df46faeaba6433cb664a7ed6875cdc91796b192",
	        "Created": "2025-10-25T10:19:22.595874496Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 631836,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:20:49.225910051Z",
	            "FinishedAt": "2025-10-25T10:20:47.814484127Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/695e74f3d798876c79ec9dbb6df46faeaba6433cb664a7ed6875cdc91796b192/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/695e74f3d798876c79ec9dbb6df46faeaba6433cb664a7ed6875cdc91796b192/hostname",
	        "HostsPath": "/var/lib/docker/containers/695e74f3d798876c79ec9dbb6df46faeaba6433cb664a7ed6875cdc91796b192/hosts",
	        "LogPath": "/var/lib/docker/containers/695e74f3d798876c79ec9dbb6df46faeaba6433cb664a7ed6875cdc91796b192/695e74f3d798876c79ec9dbb6df46faeaba6433cb664a7ed6875cdc91796b192-json.log",
	        "Name": "/no-preload-899665",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-899665:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-899665",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "695e74f3d798876c79ec9dbb6df46faeaba6433cb664a7ed6875cdc91796b192",
	                "LowerDir": "/var/lib/docker/overlay2/8b682c6b2402b5b71231c37bbc02e0297cfeac2f648531c88d56a37d472a144a-init/diff:/var/lib/docker/overlay2/9d1960ec6ab151df7efbd4d43f9ccd1aaf4e0bd9e7db4285118644a6d5a54279/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8b682c6b2402b5b71231c37bbc02e0297cfeac2f648531c88d56a37d472a144a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8b682c6b2402b5b71231c37bbc02e0297cfeac2f648531c88d56a37d472a144a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8b682c6b2402b5b71231c37bbc02e0297cfeac2f648531c88d56a37d472a144a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-899665",
	                "Source": "/var/lib/docker/volumes/no-preload-899665/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-899665",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-899665",
	                "name.minikube.sigs.k8s.io": "no-preload-899665",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0dcf80ce3569fcb39d59eab6b6cb6a86db49ea084b7a707e96d1bb72fcf2d633",
	            "SandboxKey": "/var/run/docker/netns/0dcf80ce3569",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-899665": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:b8:35:85:1c:0b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c8aca1f62a354ce1975d9d9ac93fc72b53c6dd0c4c9ae45ab02ef47d3a0fdf93",
	                    "EndpointID": "b6825e60c438126a3252881fbf02da758c698582c166bdadcab5e100b71e9e2b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-899665",
	                        "695e74f3d798"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
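The Ports block in the inspect dump above is what minikube resolves at runtime; the same Go template that pause.go issues (visible in the stderr log earlier) can be run by hand. A small sketch, assuming the container is still up:

	# Host port mapped to the node's SSH port (33118 in this run)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-899665
	# All published ports at a glance
	docker port no-preload-899665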
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-899665 -n no-preload-899665
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-899665 -n no-preload-899665: exit status 2 (364.056867ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
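The exit status 2 is unsurprising after a failed pause: the node container is still Running, but the stderr above shows the pause path already ran `systemctl disable --now kubelet`, and minikube status deliberately exits non-zero when components disagree, hence the harness's "may be ok" note. The --format flag takes a Go template over the status struct, so several fields can be checked in one call; a minimal sketch:

	# One-line component summary for the profile
	out/minikube-linux-amd64 status -p no-preload-899665 --format '{{.Host}}/{{.Kubelet}}/{{.APIServer}}'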
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-899665 logs -n 25
E1025 10:21:55.554424  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/auto-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-899665 logs -n 25: (1.241613206s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-714798 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:21 UTC │
	│ addons  │ enable metrics-server -p no-preload-899665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ stop    │ -p no-preload-899665 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ addons  │ enable metrics-server -p newest-cni-667966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ stop    │ -p newest-cni-667966 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-767846 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-667966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p newest-cni-667966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ stop    │ -p default-k8s-diff-port-767846 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:21 UTC │
	│ addons  │ enable dashboard -p no-preload-899665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p no-preload-899665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:21 UTC │
	│ image   │ newest-cni-667966 image list --format=json                                                                                                                                                                                                    │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ pause   │ -p newest-cni-667966 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-767846 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ start   │ -p default-k8s-diff-port-767846 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p newest-cni-667966                                                                                                                                                                                                                          │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p newest-cni-667966                                                                                                                                                                                                                          │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p disable-driver-mounts-805899                                                                                                                                                                                                               │ disable-driver-mounts-805899 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ start   │ -p embed-certs-683681 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-683681           │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	│ image   │ old-k8s-version-714798 image list --format=json                                                                                                                                                                                               │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ pause   │ -p old-k8s-version-714798 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	│ delete  │ -p old-k8s-version-714798                                                                                                                                                                                                                     │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p old-k8s-version-714798                                                                                                                                                                                                                     │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ image   │ no-preload-899665 image list --format=json                                                                                                                                                                                                    │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ pause   │ -p no-preload-899665 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:21:10
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:21:10.148251  638584 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:21:10.148605  638584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:21:10.148630  638584 out.go:374] Setting ErrFile to fd 2...
	I1025 10:21:10.148638  638584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:21:10.148938  638584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 10:21:10.149711  638584 out.go:368] Setting JSON to false
	I1025 10:21:10.151634  638584 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7419,"bootTime":1761380251,"procs":447,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 10:21:10.151786  638584 start.go:141] virtualization: kvm guest
	I1025 10:21:10.154262  638584 out.go:179] * [embed-certs-683681] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 10:21:10.155881  638584 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:21:10.155931  638584 notify.go:220] Checking for updates...
	I1025 10:21:10.158857  638584 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:21:10.160458  638584 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:21:10.161966  638584 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	I1025 10:21:10.163444  638584 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 10:21:10.165074  638584 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:21:10.167201  638584 config.go:182] Loaded profile config "default-k8s-diff-port-767846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:10.167413  638584 config.go:182] Loaded profile config "no-preload-899665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:10.167543  638584 config.go:182] Loaded profile config "old-k8s-version-714798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:21:10.167677  638584 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:21:10.195271  638584 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 10:21:10.195411  638584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:21:10.276912  638584 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-25 10:21:10.253206883 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:21:10.277024  638584 docker.go:318] overlay module found
	I1025 10:21:10.278915  638584 out.go:179] * Using the docker driver based on user configuration
	I1025 10:21:10.280189  638584 start.go:305] selected driver: docker
	I1025 10:21:10.280210  638584 start.go:925] validating driver "docker" against <nil>
	I1025 10:21:10.280228  638584 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:21:10.280870  638584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:21:10.351945  638584 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-25 10:21:10.340512633 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:21:10.352169  638584 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 10:21:10.352450  638584 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:21:10.354600  638584 out.go:179] * Using Docker driver with root privileges
	I1025 10:21:10.356067  638584 cni.go:84] Creating CNI manager for ""
	I1025 10:21:10.356119  638584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:21:10.356128  638584 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 10:21:10.356206  638584 start.go:349] cluster config:
	{Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
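The cluster config dumped above is the in-memory struct that minikube persists straight to the profile's config.json (see the profile.go save a few lines below). A minimal Go sketch of that persistence step; the reduced struct and the path are illustrative, not minikube's real schema:

```go
// Sketch of saving a reduced cluster config to a profile's config.json.
// Field set and directory layout are assumptions for illustration.
package main

import (
	"encoding/json"
	"os"
	"path/filepath"
)

type ClusterConfig struct {
	Name              string `json:"Name"`
	Driver            string `json:"Driver"`
	ContainerRuntime  string `json:"ContainerRuntime"`
	KubernetesVersion string `json:"KubernetesVersion"`
	Memory            int    `json:"Memory"`
	CPUs              int    `json:"CPUs"`
}

func saveProfile(dir string, cfg ClusterConfig) error {
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	// Write to a temp file first so a crash never leaves a truncated config.
	tmp := filepath.Join(dir, "config.json.tmp")
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, filepath.Join(dir, "config.json"))
}

func main() {
	_ = saveProfile("/tmp/profiles/embed-certs-683681", ClusterConfig{
		Name: "embed-certs-683681", Driver: "docker",
		ContainerRuntime: "crio", KubernetesVersion: "v1.34.1",
		Memory: 3072, CPUs: 2,
	})
}
```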
	I1025 10:21:10.359204  638584 out.go:179] * Starting "embed-certs-683681" primary control-plane node in "embed-certs-683681" cluster
	I1025 10:21:10.360475  638584 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:21:10.361884  638584 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:21:10.363223  638584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:10.363261  638584 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:21:10.363282  638584 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 10:21:10.363300  638584 cache.go:58] Caching tarball of preloaded images
	I1025 10:21:10.363426  638584 preload.go:233] Found /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 10:21:10.363440  638584 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:21:10.363573  638584 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/config.json ...
	I1025 10:21:10.363603  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/config.json: {Name:mk7d7cb38e92abe91e5617ae8c0cde69820d256b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
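The WriteFile lock line above logs the lock parameters minikube uses (Delay:500ms Timeout:1m0s). A hedged sketch of that retry-until-timeout pattern using a plain exclusive lock file; the path and mechanics here are illustrative, not minikube's actual lock implementation:

```go
// Sketch: retry an exclusive lock-file create every `delay`, give up after
// `timeout` -- mirroring the Delay/Timeout fields in the log line above.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if !errors.Is(err, os.ErrExist) {
			return nil, err // real I/O error, not lock contention
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireLock("/tmp/config.json.lock", 500*time.Millisecond, time.Minute)
	if err != nil {
		fmt.Println("lock failed:", err)
		return
	}
	defer release()
	fmt.Println("lock held; safe to write config.json")
}
```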
	I1025 10:21:10.401470  638584 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:21:10.401501  638584 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:21:10.401524  638584 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:21:10.401557  638584 start.go:360] acquireMachinesLock for embed-certs-683681: {Name:mkb49d854e007783568583b216321c2ada753d14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:21:10.401681  638584 start.go:364] duration metric: took 100.361µs to acquireMachinesLock for "embed-certs-683681"
	I1025 10:21:10.401719  638584 start.go:93] Provisioning new machine with config: &{Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:21:10.401811  638584 start.go:125] createHost starting for "" (driver="docker")
	I1025 10:21:09.341512  636484 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:21:09.341546  636484 machine.go:96] duration metric: took 4.679953004s to provisionDockerMachine
	I1025 10:21:09.341561  636484 start.go:293] postStartSetup for "default-k8s-diff-port-767846" (driver="docker")
	I1025 10:21:09.341576  636484 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:21:09.341718  636484 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:21:09.341793  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.365110  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:09.484377  636484 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:21:09.489414  636484 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:21:09.489442  636484 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:21:09.489453  636484 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/addons for local assets ...
	I1025 10:21:09.489516  636484 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/files for local assets ...
	I1025 10:21:09.489612  636484 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem -> 3254552.pem in /etc/ssl/certs
	I1025 10:21:09.489735  636484 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:21:09.499262  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:09.521134  636484 start.go:296] duration metric: took 179.55364ms for postStartSetup
	I1025 10:21:09.521229  636484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:21:09.521289  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.546865  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:09.651523  636484 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:21:09.656840  636484 fix.go:56] duration metric: took 5.400890226s for fixHost
	I1025 10:21:09.656881  636484 start.go:83] releasing machines lock for "default-k8s-diff-port-767846", held for 5.400960044s
	I1025 10:21:09.656963  636484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-767846
	I1025 10:21:09.678291  636484 ssh_runner.go:195] Run: cat /version.json
	I1025 10:21:09.678335  636484 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:21:09.678385  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.678417  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.699727  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:09.699888  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:09.801273  636484 ssh_runner.go:195] Run: systemctl --version
	I1025 10:21:09.869861  636484 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:21:09.912691  636484 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:21:09.918693  636484 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:21:09.918789  636484 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:21:09.929691  636484 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:21:09.929723  636484 start.go:495] detecting cgroup driver to use...
	I1025 10:21:09.929768  636484 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 10:21:09.929846  636484 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:21:09.947292  636484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:21:09.962309  636484 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:21:09.962380  636484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:21:09.981742  636484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:21:09.997805  636484 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:21:10.091545  636484 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:21:10.191661  636484 docker.go:234] disabling docker service ...
	I1025 10:21:10.191739  636484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:21:10.211470  636484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:21:10.232902  636484 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:21:10.343594  636484 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:21:10.458272  636484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:21:10.475115  636484 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:21:10.492690  636484 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:21:10.492760  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.505848  636484 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 10:21:10.505908  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.517567  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.531478  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.545455  636484 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:21:10.557702  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.571143  636484 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.582240  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
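The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf with a delete-then-insert pattern: any existing conmon_cgroup line is removed, then the desired one is re-added right after cgroup_manager, so re-running the configuration step stays idempotent. A small Go sketch of the same pattern applied to an in-memory TOML fragment (names mirror the logged commands; this is not minikube's code):

```go
// Sketch of the delete-then-insert config edit: drop any existing key,
// then re-add it in a known place, so repeated runs converge.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

func setConmonCgroup(conf, value string) string {
	// Drop any existing conmon_cgroup line (mirrors: sed '/conmon_cgroup = .*/d').
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
	// Re-insert after cgroup_manager (mirrors: sed '/cgroup_manager = .*/a ...').
	re := regexp.MustCompile(`(?m)^(.*cgroup_manager = .*)$`)
	return re.ReplaceAllString(conf, "$1\nconmon_cgroup = \""+value+"\"")
}

func main() {
	conf := strings.Join([]string{
		`[crio.runtime]`,
		`cgroup_manager = "systemd"`,
		`conmon_cgroup = "system.slice"`,
	}, "\n")
	fmt.Println(setConmonCgroup(conf, "pod"))
	// Running it twice yields the same output: the old line is always
	// deleted before the new one is appended.
}
```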
	I1025 10:21:10.593233  636484 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:21:10.602910  636484 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:21:10.612119  636484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:10.705561  636484 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:21:10.849205  636484 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:21:10.849299  636484 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:21:10.853987  636484 start.go:563] Will wait 60s for crictl version
	I1025 10:21:10.854061  636484 ssh_runner.go:195] Run: which crictl
	I1025 10:21:10.858281  636484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:21:10.891437  636484 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:21:10.891545  636484 ssh_runner.go:195] Run: crio --version
	I1025 10:21:10.928397  636484 ssh_runner.go:195] Run: crio --version
	I1025 10:21:10.968448  636484 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:21:10.969831  636484 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-767846 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:21:10.988308  636484 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1025 10:21:10.993548  636484 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
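The /etc/hosts rewrite above filters out any stale host.minikube.internal line, appends the fresh mapping, and copies the result back over the file. A hedged Go equivalent of that pattern; note it truncates and writes in place (same inode) rather than renaming, matching the `cp` in the logged command, which matters because /etc/hosts is typically bind-mounted into the container:

```go
// Sketch: drop any line already mapping `name`, append the fresh ip->name
// mapping, write the file back in place. Paths are illustrative.
package main

import (
	"os"
	"strings"
)

func upsertHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // old mapping: replaced below
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	// Truncate-and-write keeps the same inode, like the `cp` in the log.
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	_ = upsertHost("/tmp/hosts", "192.168.103.1", "host.minikube.internal")
}
```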
	I1025 10:21:11.007467  636484 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-767846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-767846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:21:11.007638  636484 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:11.007713  636484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:11.050081  636484 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:11.050104  636484 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:21:11.050159  636484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:11.079408  636484 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:11.079432  636484 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:21:11.079440  636484 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1025 10:21:11.079542  636484 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-767846 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-767846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:21:11.079604  636484 ssh_runner.go:195] Run: crio config
	I1025 10:21:11.135081  636484 cni.go:84] Creating CNI manager for ""
	I1025 10:21:11.135104  636484 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:21:11.135125  636484 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:21:11.135152  636484 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-767846 NodeName:default-k8s-diff-port-767846 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:21:11.135274  636484 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-767846"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
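The kubeadm.yaml rendered above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration separated by `---`). A sketch of decoding such a stream in Go and pulling fields out of the KubeletConfiguration document; the struct is a tiny illustrative subset, not the full upstream type, and it assumes gopkg.in/yaml.v3:

```go
// Sketch: decode a multi-document YAML stream, key on `kind`, and read two
// KubeletConfiguration fields from the generated config shown above.
package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

type kubeletConfig struct {
	Kind                     string `yaml:"kind"`
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
}

func main() {
	stream := `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
`
	dec := yaml.NewDecoder(strings.NewReader(stream))
	for {
		var doc kubeletConfig
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		if doc.Kind == "KubeletConfiguration" {
			fmt.Println(doc.CgroupDriver, doc.ContainerRuntimeEndpoint)
		}
	}
}
```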
	I1025 10:21:11.135376  636484 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:21:11.146044  636484 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:21:11.146127  636484 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:21:11.157527  636484 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1025 10:21:11.173105  636484 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:21:11.194054  636484 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1025 10:21:11.210598  636484 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:21:11.215039  636484 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:21:11.228199  636484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:11.315547  636484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:21:11.344889  636484 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846 for IP: 192.168.103.2
	I1025 10:21:11.344914  636484 certs.go:195] generating shared ca certs ...
	I1025 10:21:11.344936  636484 certs.go:227] acquiring lock for ca certs: {Name:mke559c04eea9c265cb2a7beab0da125bee52db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:11.345096  636484 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key
	I1025 10:21:11.345147  636484 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key
	I1025 10:21:11.345159  636484 certs.go:257] generating profile certs ...
	I1025 10:21:11.345283  636484 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/client.key
	I1025 10:21:11.345382  636484 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/apiserver.key.0fbb729d
	I1025 10:21:11.345433  636484 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/proxy-client.key
	I1025 10:21:11.345576  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem (1338 bytes)
	W1025 10:21:11.345621  636484 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455_empty.pem, impossibly tiny 0 bytes
	I1025 10:21:11.345634  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:21:11.345661  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:21:11.345688  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:21:11.345716  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem (1679 bytes)
	I1025 10:21:11.345768  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:11.346665  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:21:11.371779  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 10:21:11.395674  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:21:11.420943  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:21:11.450225  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 10:21:11.471921  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:21:11.491964  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:21:11.513657  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:21:11.539802  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem --> /usr/share/ca-certificates/325455.pem (1338 bytes)
	I1025 10:21:11.564482  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /usr/share/ca-certificates/3254552.pem (1708 bytes)
	I1025 10:21:11.585472  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:21:11.605762  636484 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:21:11.620550  636484 ssh_runner.go:195] Run: openssl version
	I1025 10:21:11.628742  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/325455.pem && ln -fs /usr/share/ca-certificates/325455.pem /etc/ssl/certs/325455.pem"
	I1025 10:21:11.640494  636484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/325455.pem
	I1025 10:21:11.645456  636484 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:38 /usr/share/ca-certificates/325455.pem
	I1025 10:21:11.645535  636484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/325455.pem
	I1025 10:21:11.681821  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/325455.pem /etc/ssl/certs/51391683.0"
	I1025 10:21:11.692404  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3254552.pem && ln -fs /usr/share/ca-certificates/3254552.pem /etc/ssl/certs/3254552.pem"
	I1025 10:21:11.702722  636484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3254552.pem
	I1025 10:21:11.707367  636484 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:38 /usr/share/ca-certificates/3254552.pem
	I1025 10:21:11.707434  636484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3254552.pem
	I1025 10:21:11.744550  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3254552.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:21:11.754748  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:21:11.765670  636484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:11.770501  636484 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:32 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:11.770568  636484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:11.806437  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
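The `test -L || ln -fs` guards above maintain OpenSSL-style trust-directory symlinks: tools look a CA certificate up by a subject-hash filename such as b5213941.0, and the hash comes from `openssl x509 -hash -noout -in <cert>`. A sketch of creating one such link; the paths are illustrative:

```go
// Sketch: compute the OpenSSL subject hash for a PEM cert and create the
// <hash>.0 symlink in the trust directory, skipping if it already exists.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked, matching the `test -L || ln -fs` guard
	}
	return os.Symlink(certPath, link)
}

func main() {
	err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(err)
}
```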
	I1025 10:21:11.816622  636484 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:21:11.821750  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:21:11.869084  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:21:11.918865  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:21:11.967891  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:21:12.023868  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:21:12.087958  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
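Each `openssl x509 -checkend 86400` run above asks whether the certificate expires within the next 24 hours (non-zero exit triggers regeneration). The same check in Go using only the standard library:

```go
// Sketch of the -checkend 86400 semantics: report true if the certificate's
// NotAfter falls inside the given window from now.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; regeneration needed")
	}
}
```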
	I1025 10:21:12.133903  636484 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-767846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-767846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:21:12.133995  636484 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:21:12.134057  636484 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:21:12.176249  636484 cri.go:89] found id: "5651b5355eb316ad91569abe8d79084a109bfb7f5e3317226217acc032d02de1"
	I1025 10:21:12.176277  636484 cri.go:89] found id: "4a3076ac0e1e7cab1ae1e3436bd70e3c3b3965b186f842a7e0c0d524505d0c57"
	I1025 10:21:12.176284  636484 cri.go:89] found id: "19816f19d39c5773a667353841a1802f9e8d4a9493ed76177e3cffba9eb45dd7"
	I1025 10:21:12.176289  636484 cri.go:89] found id: "93e7c0501a9a92272de292874e804fe8724d5cd8097e77aa3924e634b8f8d63b"
	I1025 10:21:12.176294  636484 cri.go:89] found id: ""
	I1025 10:21:12.176379  636484 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:21:12.191582  636484 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:21:12Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:21:12.191656  636484 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:21:12.201840  636484 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:21:12.201870  636484 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:21:12.201918  636484 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:21:12.211065  636484 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:21:12.211910  636484 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-767846" does not appear in /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:21:12.212424  636484 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-321838/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-767846" cluster setting kubeconfig missing "default-k8s-diff-port-767846" context setting]
	I1025 10:21:12.212991  636484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:12.214595  636484 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:21:12.225309  636484 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1025 10:21:12.225361  636484 kubeadm.go:601] duration metric: took 23.484211ms to restartPrimaryControlPlane
	I1025 10:21:12.225372  636484 kubeadm.go:402] duration metric: took 91.480993ms to StartCluster
	I1025 10:21:12.225394  636484 settings.go:142] acquiring lock: {Name:mkccf1ae03faaf532633e370e89b56d0fc3d2b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:12.225489  636484 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:21:12.226739  636484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:12.227039  636484 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:21:12.227167  636484 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:21:12.227262  636484 config.go:182] Loaded profile config "default-k8s-diff-port-767846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:12.227271  636484 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-767846"
	I1025 10:21:12.227291  636484 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-767846"
	W1025 10:21:12.227299  636484 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:21:12.227297  636484 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-767846"
	I1025 10:21:12.227332  636484 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-767846"
	I1025 10:21:12.227339  636484 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-767846"
	W1025 10:21:12.227342  636484 addons.go:247] addon dashboard should already be in state true
	I1025 10:21:12.227353  636484 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-767846"
	I1025 10:21:12.227367  636484 host.go:66] Checking if "default-k8s-diff-port-767846" exists ...
	I1025 10:21:12.227371  636484 host.go:66] Checking if "default-k8s-diff-port-767846" exists ...
	I1025 10:21:12.227806  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.227847  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.227905  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.232961  636484 out.go:179] * Verifying Kubernetes components...
	I1025 10:21:12.234572  636484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:12.260042  636484 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:21:12.260116  636484 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:21:12.261263  636484 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-767846"
	W1025 10:21:12.261282  636484 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:21:12.261305  636484 host.go:66] Checking if "default-k8s-diff-port-767846" exists ...
	I1025 10:21:12.261728  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.262059  636484 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:21:12.262078  636484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:21:12.262129  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:12.265414  636484 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1025 10:21:09.268544  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	W1025 10:21:11.766755  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	W1025 10:21:09.831833  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:12.337504  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	I1025 10:21:12.266825  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:21:12.266852  636484 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:21:12.266926  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:12.302238  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:12.306595  636484 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:21:12.306701  636484 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:21:12.306633  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:12.307467  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:12.337295  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:12.414307  636484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:21:12.436001  636484 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-767846" to be "Ready" ...
	I1025 10:21:12.436611  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:21:12.436644  636484 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:21:12.451080  636484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:21:12.456814  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:21:12.456844  636484 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:21:12.465383  636484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:21:12.479456  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:21:12.479485  636484 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:21:12.501005  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:21:12.501032  636484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:21:12.526625  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:21:12.526672  636484 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:21:12.553034  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:21:12.553076  636484 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:21:12.573193  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:21:12.573227  636484 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:21:12.590613  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:21:12.590687  636484 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:21:12.606035  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:21:12.606071  636484 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:21:12.624851  636484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
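After staging each dashboard manifest under /etc/kubernetes/addons, the addon is applied with a single kubectl invocation carrying one -f flag per file, as in the command above, so the whole set is submitted together. A hedged sketch of building that invocation; binary and manifest paths are illustrative:

```go
// Sketch: run one `kubectl apply` over a list of staged manifests,
// pinning KUBECONFIG the way the logged command does.
package main

import (
	"fmt"
	"os/exec"
)

func applyManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(cmd.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := applyManifests(
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
		},
	)
	if err != nil {
		fmt.Println(err)
	}
}
```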
	I1025 10:21:13.931289  636484 node_ready.go:49] node "default-k8s-diff-port-767846" is "Ready"
	I1025 10:21:13.931333  636484 node_ready.go:38] duration metric: took 1.495294194s for node "default-k8s-diff-port-767846" to be "Ready" ...
	I1025 10:21:13.931355  636484 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:21:13.931415  636484 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:21:10.403779  638584 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 10:21:10.404001  638584 start.go:159] libmachine.API.Create for "embed-certs-683681" (driver="docker")
	I1025 10:21:10.404030  638584 client.go:168] LocalClient.Create starting
	I1025 10:21:10.404114  638584 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem
	I1025 10:21:10.404167  638584 main.go:141] libmachine: Decoding PEM data...
	I1025 10:21:10.404189  638584 main.go:141] libmachine: Parsing certificate...
	I1025 10:21:10.404267  638584 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem
	I1025 10:21:10.404309  638584 main.go:141] libmachine: Decoding PEM data...
	I1025 10:21:10.404335  638584 main.go:141] libmachine: Parsing certificate...
	I1025 10:21:10.404773  638584 cli_runner.go:164] Run: docker network inspect embed-certs-683681 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 10:21:10.426055  638584 cli_runner.go:211] docker network inspect embed-certs-683681 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 10:21:10.426150  638584 network_create.go:284] running [docker network inspect embed-certs-683681] to gather additional debugging logs...
	I1025 10:21:10.426175  638584 cli_runner.go:164] Run: docker network inspect embed-certs-683681
	W1025 10:21:10.450027  638584 cli_runner.go:211] docker network inspect embed-certs-683681 returned with exit code 1
	I1025 10:21:10.450066  638584 network_create.go:287] error running [docker network inspect embed-certs-683681]: docker network inspect embed-certs-683681: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-683681 not found
	I1025 10:21:10.450079  638584 network_create.go:289] output of [docker network inspect embed-certs-683681]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-683681 not found
	
	** /stderr **
	I1025 10:21:10.450215  638584 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:21:10.472971  638584 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b7c770f4d6bb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:31:17:4a:ca:3a} reservation:<nil>}
	I1025 10:21:10.473601  638584 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5189eca196b1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:42:d7:a0:fe:65} reservation:<nil>}
	I1025 10:21:10.474232  638584 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a58b5f36975c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1e:4d:ae:71:f0:49} reservation:<nil>}
	I1025 10:21:10.474754  638584 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c8aca1f62a35 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ce:65:a5:98:3f:04} reservation:<nil>}
	I1025 10:21:10.475283  638584 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-cc93092e09ae IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:c2:73:0a:fa:f6:13} reservation:<nil>}
	I1025 10:21:10.475999  638584 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a03c50}
	I1025 10:21:10.476026  638584 network_create.go:124] attempt to create docker network embed-certs-683681 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1025 10:21:10.476083  638584 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-683681 embed-certs-683681
	I1025 10:21:10.551427  638584 network_create.go:108] docker network embed-certs-683681 192.168.94.0/24 created
	I1025 10:21:10.551459  638584 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-683681" container
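
The subnet probe in the lines above walks the private 192.168.x.0/24 ranges, skipping every CIDR that already has a bridge attached, and takes the first free one; in this log the third octet advances by 9 each time (49, 58, 67, 76, 85, 94). A minimal Go sketch of that scan, hedged as an illustration only: the `taken` set stands in for the real `docker network inspect` calls, and the step size of 9 is inferred from the progression in this log.

package main

import "fmt"

func main() {
	// Subnets already claimed by other minikube bridges (taken from the log above).
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	// Probe candidate /24s, stepping the third octet by 9 as this log does.
	for octet := 49; octet <= 255; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[cidr] {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		// The gateway gets .1 and the first node gets .2,
		// hence the static IP 192.168.94.2 calculated above.
		fmt.Println("using free private subnet", cidr)
		break
	}
}
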
	I1025 10:21:10.551518  638584 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 10:21:10.575731  638584 cli_runner.go:164] Run: docker volume create embed-certs-683681 --label name.minikube.sigs.k8s.io=embed-certs-683681 --label created_by.minikube.sigs.k8s.io=true
	I1025 10:21:10.596450  638584 oci.go:103] Successfully created a docker volume embed-certs-683681
	I1025 10:21:10.596543  638584 cli_runner.go:164] Run: docker run --rm --name embed-certs-683681-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-683681 --entrypoint /usr/bin/test -v embed-certs-683681:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 10:21:11.043993  638584 oci.go:107] Successfully prepared a docker volume embed-certs-683681
	I1025 10:21:11.044039  638584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:11.044062  638584 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 10:21:11.044129  638584 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-683681:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
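
The extraction step above mounts the lz4 preload tarball read-only into a throwaway kicbase container and untars it into the named volume, so the node container later starts with all images already under /var. A rough Go sketch of the equivalent invocation (the image digest is elided for brevity; paths and names come from the log above):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Mount the preload tarball read-only and the target volume at /extractDir,
	// then let tar (overriding the entrypoint) unpack it with lz4 decompression.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", "/home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro",
		"-v", "embed-certs-683681:/extractDir",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773", // digest elided
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
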
	W1025 10:21:13.772552  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	I1025 10:21:14.336599  624632 pod_ready.go:94] pod "coredns-5dd5756b68-k5644" is "Ready"
	I1025 10:21:14.336630  624632 pod_ready.go:86] duration metric: took 39.577109588s for pod "coredns-5dd5756b68-k5644" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.340650  624632 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.346235  624632 pod_ready.go:94] pod "etcd-old-k8s-version-714798" is "Ready"
	I1025 10:21:14.346269  624632 pod_ready.go:86] duration metric: took 5.588309ms for pod "etcd-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.349654  624632 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.355198  624632 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-714798" is "Ready"
	I1025 10:21:14.355230  624632 pod_ready.go:86] duration metric: took 5.550064ms for pod "kube-apiserver-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.359203  624632 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.515864  624632 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-714798" is "Ready"
	I1025 10:21:14.515908  624632 pod_ready.go:86] duration metric: took 156.674255ms for pod "kube-controller-manager-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.679941  624632 pod_ready.go:83] waiting for pod "kube-proxy-kqg7q" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.064359  624632 pod_ready.go:94] pod "kube-proxy-kqg7q" is "Ready"
	I1025 10:21:15.064395  624632 pod_ready.go:86] duration metric: took 384.425103ms for pod "kube-proxy-kqg7q" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.264420  624632 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.664469  624632 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-714798" is "Ready"
	I1025 10:21:15.664501  624632 pod_ready.go:86] duration metric: took 400.048856ms for pod "kube-scheduler-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.664517  624632 pod_ready.go:40] duration metric: took 40.910543454s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
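
The pod_ready loop above waits for each control-plane pod to either report the Ready condition or disappear. A rough client-go sketch of that check, hedged as an illustration: minikube's own pod_ready.go differs in detail, the kubeconfig path here is hypothetical, and the pod name is taken from this log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-5dd5756b68-k5644", metav1.GetOptions{})
		if errors.IsNotFound(err) {
			// "Ready or be gone": a deleted pod also ends the wait.
			fmt.Println("pod is gone; treating as done")
			return
		}
		if err == nil && isReady(pod) {
			fmt.Printf("pod %q is Ready\n", pod.Name)
			return
		}
		time.Sleep(2 * time.Second) // not Ready yet; poll again
	}
}
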
	I1025 10:21:15.713277  624632 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1025 10:21:15.739862  624632 out.go:203] 
	W1025 10:21:15.783078  624632 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1025 10:21:15.791059  624632 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1025 10:21:15.796132  624632 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-714798" cluster and "default" namespace by default
	I1025 10:21:15.245915  636484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.794706474s)
	I1025 10:21:15.246013  636484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.780553475s)
	I1025 10:21:16.201960  636484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.577043142s)
	I1025 10:21:16.202175  636484 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.270743207s)
	I1025 10:21:16.202205  636484 api_server.go:72] duration metric: took 3.975127965s to wait for apiserver process to appear ...
	I1025 10:21:16.202212  636484 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:21:16.202233  636484 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1025 10:21:16.203931  636484 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-767846 addons enable metrics-server
	
	I1025 10:21:16.206179  636484 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1025 10:21:14.831620  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:16.832274  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	I1025 10:21:16.207469  636484 addons.go:514] duration metric: took 3.980316596s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1025 10:21:16.208161  636484 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:21:16.208186  636484 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:21:16.702507  636484 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1025 10:21:16.707281  636484 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1025 10:21:16.708497  636484 api_server.go:141] control plane version: v1.34.1
	I1025 10:21:16.708529  636484 api_server.go:131] duration metric: took 506.309184ms to wait for apiserver health ...
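
The healthz wait above is a plain poll: hit /healthz over HTTPS, treat anything but 200 as not-ready, and retry on an interval. A minimal sketch of such a loop, assuming an insecure TLS client and an invented retry interval (minikube's real client trusts the cluster CA instead); the endpoint is the one from this log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Skip TLS verification for the sketch only; minikube uses the cluster CA.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	url := "https://192.168.103.2:8444/healthz" // endpoint from the log above
	for {
		resp, err := client.Get(url)
		if err != nil {
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Printf("healthz returned 200: %s\n", body)
			return
		}
		// A 500 with "[-]poststarthook/rbac/bootstrap-roles failed" is normal
		// while bootstrap roles are still being created; just retry.
		fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		time.Sleep(500 * time.Millisecond)
	}
}
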
	I1025 10:21:16.708542  636484 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:21:16.712747  636484 system_pods.go:59] 8 kube-system pods found
	I1025 10:21:16.712806  636484 system_pods.go:61] "coredns-66bc5c9577-rznxv" [d7eae20c-8d39-4486-ab11-13675911180f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:16.712819  636484 system_pods.go:61] "etcd-default-k8s-diff-port-767846" [7612c238-ca0d-458d-a901-e96167590fc2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:21:16.712835  636484 system_pods.go:61] "kindnet-vcqs2" [e41fd0fd-97c4-44ef-a645-cf0136340098] Running
	I1025 10:21:16.712845  636484 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-767846" [6eaa12a1-d4b6-4e96-81ed-18662e83034c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:21:16.712859  636484 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-767846" [7ffb55c8-db2f-4807-ace7-044a0c281f62] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:21:16.712874  636484 system_pods.go:61] "kube-proxy-cvm5c" [42278e98-5278-4efa-b484-ec73c16fc851] Running
	I1025 10:21:16.712885  636484 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-767846" [934d0b38-85ad-4c54-9e94-970177aa8cf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:21:16.712924  636484 system_pods.go:61] "storage-provisioner" [06a917da-eaa2-4b50-8c56-31a0ca7d14e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:16.712936  636484 system_pods.go:74] duration metric: took 4.383599ms to wait for pod list to return data ...
	I1025 10:21:16.712948  636484 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:21:16.715673  636484 default_sa.go:45] found service account: "default"
	I1025 10:21:16.715694  636484 default_sa.go:55] duration metric: took 2.737037ms for default service account to be created ...
	I1025 10:21:16.715704  636484 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:21:16.718943  636484 system_pods.go:86] 8 kube-system pods found
	I1025 10:21:16.718978  636484 system_pods.go:89] "coredns-66bc5c9577-rznxv" [d7eae20c-8d39-4486-ab11-13675911180f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:16.718990  636484 system_pods.go:89] "etcd-default-k8s-diff-port-767846" [7612c238-ca0d-458d-a901-e96167590fc2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:21:16.718997  636484 system_pods.go:89] "kindnet-vcqs2" [e41fd0fd-97c4-44ef-a645-cf0136340098] Running
	I1025 10:21:16.719005  636484 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-767846" [6eaa12a1-d4b6-4e96-81ed-18662e83034c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:21:16.719014  636484 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-767846" [7ffb55c8-db2f-4807-ace7-044a0c281f62] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:21:16.719034  636484 system_pods.go:89] "kube-proxy-cvm5c" [42278e98-5278-4efa-b484-ec73c16fc851] Running
	I1025 10:21:16.719042  636484 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-767846" [934d0b38-85ad-4c54-9e94-970177aa8cf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:21:16.719049  636484 system_pods.go:89] "storage-provisioner" [06a917da-eaa2-4b50-8c56-31a0ca7d14e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:16.719059  636484 system_pods.go:126] duration metric: took 3.347724ms to wait for k8s-apps to be running ...
	I1025 10:21:16.719070  636484 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:21:16.719120  636484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:21:16.733907  636484 system_svc.go:56] duration metric: took 14.825705ms WaitForService to wait for kubelet
	I1025 10:21:16.733943  636484 kubeadm.go:586] duration metric: took 4.506864504s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:21:16.733968  636484 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:21:16.737241  636484 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 10:21:16.737269  636484 node_conditions.go:123] node cpu capacity is 8
	I1025 10:21:16.737284  636484 node_conditions.go:105] duration metric: took 3.310515ms to run NodePressure ...
	I1025 10:21:16.737296  636484 start.go:241] waiting for startup goroutines ...
	I1025 10:21:16.737306  636484 start.go:246] waiting for cluster config update ...
	I1025 10:21:16.737329  636484 start.go:255] writing updated cluster config ...
	I1025 10:21:16.737611  636484 ssh_runner.go:195] Run: rm -f paused
	I1025 10:21:16.742069  636484 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:16.748801  636484 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rznxv" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:21:18.754620  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:16.111649  638584 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-683681:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.067461823s)
	I1025 10:21:16.111690  638584 kic.go:203] duration metric: took 5.067622848s to extract preloaded images to volume ...
	W1025 10:21:16.111819  638584 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 10:21:16.111866  638584 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 10:21:16.111917  638584 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 10:21:16.213690  638584 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-683681 --name embed-certs-683681 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-683681 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-683681 --network embed-certs-683681 --ip 192.168.94.2 --volume embed-certs-683681:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 10:21:16.572477  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Running}}
	I1025 10:21:16.594243  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:16.615558  638584 cli_runner.go:164] Run: docker exec embed-certs-683681 stat /var/lib/dpkg/alternatives/iptables
	I1025 10:21:16.666536  638584 oci.go:144] the created container "embed-certs-683681" has a running status.
	I1025 10:21:16.666576  638584 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa...
	I1025 10:21:16.809984  638584 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 10:21:16.847757  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:16.871585  638584 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 10:21:16.871610  638584 kic_runner.go:114] Args: [docker exec --privileged embed-certs-683681 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 10:21:16.923128  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:16.943365  638584 machine.go:93] provisionDockerMachine start ...
	I1025 10:21:16.943479  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:16.966341  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:16.966647  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:16.966668  638584 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:21:16.967537  638584 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56448->127.0.0.1:33128: read: connection reset by peer
	I1025 10:21:20.116967  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-683681
	
	I1025 10:21:20.117014  638584 ubuntu.go:182] provisioning hostname "embed-certs-683681"
	I1025 10:21:20.117084  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:20.137778  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:20.138008  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:20.138021  638584 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-683681 && echo "embed-certs-683681" | sudo tee /etc/hostname
	W1025 10:21:19.333601  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:21.831601  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:20.755645  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:22.755896  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:20.296939  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-683681
	
	I1025 10:21:20.297025  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:20.319104  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:20.319456  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:20.319479  638584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-683681' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-683681/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-683681' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:21:20.480669  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:21:20.480704  638584 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-321838/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-321838/.minikube}
	I1025 10:21:20.480727  638584 ubuntu.go:190] setting up certificates
	I1025 10:21:20.480741  638584 provision.go:84] configureAuth start
	I1025 10:21:20.480822  638584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-683681
	I1025 10:21:20.505092  638584 provision.go:143] copyHostCerts
	I1025 10:21:20.505168  638584 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem, removing ...
	I1025 10:21:20.505184  638584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem
	I1025 10:21:20.505274  638584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem (1679 bytes)
	I1025 10:21:20.505416  638584 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem, removing ...
	I1025 10:21:20.505430  638584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem
	I1025 10:21:20.505476  638584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem (1078 bytes)
	I1025 10:21:20.505561  638584 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem, removing ...
	I1025 10:21:20.505572  638584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem
	I1025 10:21:20.505630  638584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem (1123 bytes)
	I1025 10:21:20.505706  638584 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem org=jenkins.embed-certs-683681 san=[127.0.0.1 192.168.94.2 embed-certs-683681 localhost minikube]
	I1025 10:21:20.998585  638584 provision.go:177] copyRemoteCerts
	I1025 10:21:20.998661  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:21:20.998717  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.022129  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.137465  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 10:21:21.166388  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:21:21.193168  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:21:21.218286  638584 provision.go:87] duration metric: took 737.524136ms to configureAuth
	I1025 10:21:21.218330  638584 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:21:21.218553  638584 config.go:182] Loaded profile config "embed-certs-683681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:21.218676  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.245915  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:21.246236  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:21.246262  638584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:21:21.569413  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:21:21.569443  638584 machine.go:96] duration metric: took 4.626049853s to provisionDockerMachine
	I1025 10:21:21.569456  638584 client.go:171] duration metric: took 11.165417694s to LocalClient.Create
	I1025 10:21:21.569475  638584 start.go:167] duration metric: took 11.165474816s to libmachine.API.Create "embed-certs-683681"
	I1025 10:21:21.569486  638584 start.go:293] postStartSetup for "embed-certs-683681" (driver="docker")
	I1025 10:21:21.569498  638584 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:21:21.569575  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:21:21.569622  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.594722  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.713328  638584 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:21:21.718538  638584 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:21:21.718572  638584 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:21:21.718589  638584 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/addons for local assets ...
	I1025 10:21:21.718659  638584 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/files for local assets ...
	I1025 10:21:21.718787  638584 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem -> 3254552.pem in /etc/ssl/certs
	I1025 10:21:21.718927  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:21:21.729097  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:21.759300  638584 start.go:296] duration metric: took 189.796063ms for postStartSetup
	I1025 10:21:21.759764  638584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-683681
	I1025 10:21:21.783751  638584 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/config.json ...
	I1025 10:21:21.784070  638584 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:21:21.784113  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.807921  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.920186  638584 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:21:21.927662  638584 start.go:128] duration metric: took 11.525830646s to createHost
	I1025 10:21:21.927699  638584 start.go:83] releasing machines lock for "embed-certs-683681", held for 11.526002458s
	I1025 10:21:21.927785  638584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-683681
	I1025 10:21:21.954049  638584 ssh_runner.go:195] Run: cat /version.json
	I1025 10:21:21.954096  638584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:21:21.954115  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.954188  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.978409  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.979872  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:22.092988  638584 ssh_runner.go:195] Run: systemctl --version
	I1025 10:21:22.175966  638584 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:21:22.229838  638584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:21:22.236975  638584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:21:22.237063  638584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:21:22.280942  638584 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 10:21:22.280974  638584 start.go:495] detecting cgroup driver to use...
	I1025 10:21:22.281010  638584 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 10:21:22.281075  638584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:21:22.306839  638584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:21:22.324489  638584 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:21:22.324560  638584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:21:22.350902  638584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:21:22.380086  638584 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:21:22.506896  638584 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:21:22.639498  638584 docker.go:234] disabling docker service ...
	I1025 10:21:22.639578  638584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:21:22.669198  638584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:21:22.689583  638584 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:21:22.814437  638584 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:21:22.917355  638584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:21:22.933471  638584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:21:22.951220  638584 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:21:22.951289  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.964021  638584 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 10:21:22.964092  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.974888  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.985640  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.996280  638584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:21:23.008692  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:23.019742  638584 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:23.036857  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:23.048489  638584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:21:23.060801  638584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:21:23.072496  638584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:23.170641  638584 ssh_runner.go:195] Run: sudo systemctl restart crio
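
The pause-image and cgroup-driver steps above each rewrite a single key in /etc/crio/crio.conf.d/02-crio.conf via sed over SSH. A regexp-based Go equivalent of those two replacements, as an illustration only, not minikube's actual implementation:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("updated", conf, "- restart crio to apply")
}
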
	I1025 10:21:24.036513  638584 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:21:24.036615  638584 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:21:24.042080  638584 start.go:563] Will wait 60s for crictl version
	I1025 10:21:24.042156  638584 ssh_runner.go:195] Run: which crictl
	I1025 10:21:24.047422  638584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:21:24.082362  638584 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:21:24.082466  638584 ssh_runner.go:195] Run: crio --version
	I1025 10:21:24.126861  638584 ssh_runner.go:195] Run: crio --version
	I1025 10:21:24.175837  638584 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:21:24.178134  638584 cli_runner.go:164] Run: docker network inspect embed-certs-683681 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:21:24.201413  638584 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1025 10:21:24.207278  638584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:21:24.223512  638584 kubeadm.go:883] updating cluster {Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:21:24.223683  638584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:24.223762  638584 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:24.272966  638584 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:24.272993  638584 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:21:24.273051  638584 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:24.308934  638584 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:24.308965  638584 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:21:24.308975  638584 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1025 10:21:24.309097  638584 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-683681 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:21:24.309184  638584 ssh_runner.go:195] Run: crio config
	I1025 10:21:24.382243  638584 cni.go:84] Creating CNI manager for ""
	I1025 10:21:24.382273  638584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:21:24.382297  638584 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:21:24.382337  638584 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-683681 NodeName:embed-certs-683681 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:21:24.382524  638584 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-683681"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:21:24.382607  638584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:21:24.394268  638584 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:21:24.394387  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:21:24.406618  638584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1025 10:21:24.425969  638584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:21:24.449251  638584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1025 10:21:24.469582  638584 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:21:24.474973  638584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:21:24.490157  638584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:24.584608  638584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:21:24.614181  638584 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681 for IP: 192.168.94.2
	I1025 10:21:24.614210  638584 certs.go:195] generating shared ca certs ...
	I1025 10:21:24.614233  638584 certs.go:227] acquiring lock for ca certs: {Name:mke559c04eea9c265cb2a7beab0da125bee52db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.614424  638584 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key
	I1025 10:21:24.614484  638584 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key
	I1025 10:21:24.614496  638584 certs.go:257] generating profile certs ...
	I1025 10:21:24.614561  638584 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.key
	I1025 10:21:24.614588  638584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.crt with IP's: []
	I1025 10:21:24.860136  638584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.crt ...
	I1025 10:21:24.860185  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.crt: {Name:mk13866e786fa05bf2537b78a891e332bde8c0bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.860411  638584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.key ...
	I1025 10:21:24.860433  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.key: {Name:mk1337a45bd58216e46a47cf6f99440d10fa8b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.860559  638584 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81
	I1025 10:21:24.860582  638584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1025 10:21:24.949254  638584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81 ...
	I1025 10:21:24.949286  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81: {Name:mkc51a7d58b8866a38120d27081d78fd5d68e786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.949518  638584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81 ...
	I1025 10:21:24.949547  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81: {Name:mk94d386c4ce3ce7255b450634f934fa53890845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.949697  638584 certs.go:382] copying /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81 -> /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt
	I1025 10:21:24.949820  638584 certs.go:386] copying /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81 -> /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key
	I1025 10:21:24.949908  638584 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key
	I1025 10:21:24.949937  638584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt with IP's: []
	W1025 10:21:24.331982  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:26.831359  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:25.254917  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:27.754831  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:25.383221  638584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt ...
	I1025 10:21:25.383272  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt: {Name:mk46cb1967cb21d5d9aafce0c0335add4612cf00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:25.383535  638584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key ...
	I1025 10:21:25.383560  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key: {Name:mkda2e4f8c6847061b7c83d0748f50b193d241a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:25.383814  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem (1338 bytes)
	W1025 10:21:25.383870  638584 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455_empty.pem, impossibly tiny 0 bytes
	I1025 10:21:25.383887  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:21:25.383917  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:21:25.383941  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:21:25.383962  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem (1679 bytes)
	I1025 10:21:25.384004  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:25.384676  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:21:25.406810  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 10:21:25.429770  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:21:25.451189  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:21:25.475734  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1025 10:21:25.500538  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 10:21:25.522356  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:21:25.545290  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:21:25.567130  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:21:25.591445  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem --> /usr/share/ca-certificates/325455.pem (1338 bytes)
	I1025 10:21:25.616100  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /usr/share/ca-certificates/3254552.pem (1708 bytes)
	I1025 10:21:25.635723  638584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:21:25.650419  638584 ssh_runner.go:195] Run: openssl version
	I1025 10:21:25.657438  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:21:25.667296  638584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:25.671566  638584 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:32 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:25.671639  638584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:25.708223  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:21:25.718734  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/325455.pem && ln -fs /usr/share/ca-certificates/325455.pem /etc/ssl/certs/325455.pem"
	I1025 10:21:25.728930  638584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/325455.pem
	I1025 10:21:25.733604  638584 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:38 /usr/share/ca-certificates/325455.pem
	I1025 10:21:25.733672  638584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/325455.pem
	I1025 10:21:25.770496  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/325455.pem /etc/ssl/certs/51391683.0"
	I1025 10:21:25.780237  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3254552.pem && ln -fs /usr/share/ca-certificates/3254552.pem /etc/ssl/certs/3254552.pem"
	I1025 10:21:25.790312  638584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3254552.pem
	I1025 10:21:25.794835  638584 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:38 /usr/share/ca-certificates/3254552.pem
	I1025 10:21:25.794898  638584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3254552.pem
	I1025 10:21:25.832583  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3254552.pem /etc/ssl/certs/3ec20f2e.0"
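
The openssl/ln sequence above is how each CA lands in the system trust store: "openssl x509 -hash -noout" computes the certificate's subject hash, and the PEM is then symlinked to /etc/ssl/certs/<hash>.0, where OpenSSL-based clients look it up. A minimal Go sketch of those two steps, shelling out to openssl just as the ssh_runner commands do (the path is one of the certs copied above; writing into /etc/ssl/certs needs root; an illustration, not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA hashes a PEM certificate with openssl and symlinks it into
// /etc/ssl/certs under "<subject-hash>.0", mirroring the log above.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("openssl hash: %w", err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // "ln -fs" semantics: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
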
	I1025 10:21:25.842614  638584 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:21:25.846872  638584 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 10:21:25.846930  638584 kubeadm.go:400] StartCluster: {Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:21:25.847005  638584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:21:25.847068  638584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:21:25.875826  638584 cri.go:89] found id: ""
	I1025 10:21:25.875903  638584 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:21:25.885163  638584 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:21:25.894136  638584 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 10:21:25.894192  638584 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:21:25.903706  638584 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 10:21:25.903732  638584 kubeadm.go:157] found existing configuration files:
	
	I1025 10:21:25.903784  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 10:21:25.913301  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 10:21:25.913384  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:21:25.923343  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 10:21:25.932490  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 10:21:25.932550  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:21:25.941477  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 10:21:25.950962  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 10:21:25.951028  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:21:25.959533  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 10:21:25.968524  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 10:21:25.968595  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
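
The four grep-then-rm exchanges above are the stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise it is removed so kubeadm can regenerate it. A compact Go sketch of that logic (endpoint and file list taken straight from the log; a minimal illustration, not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// cleanStaleConfig removes a kubeconfig that does not reference the
// expected control-plane endpoint; unreadable or missing files fall
// through the same path, matching the grep failures logged above.
func cleanStaleConfig(path string) {
	data, err := os.ReadFile(path)
	if err != nil || !strings.Contains(string(data), endpoint) {
		_ = os.Remove(path) // "sudo rm -f" equivalent
		fmt.Printf("removed stale %s\n", path)
	}
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		cleanStaleConfig(f)
	}
}
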
	I1025 10:21:25.977380  638584 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 10:21:26.045566  638584 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 10:21:26.120440  638584 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1025 10:21:29.331743  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:31.831906  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:30.254936  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:32.256411  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:36.665150  638584 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 10:21:36.665238  638584 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 10:21:36.665366  638584 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 10:21:36.665424  638584 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 10:21:36.665455  638584 kubeadm.go:318] OS: Linux
	I1025 10:21:36.665528  638584 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 10:21:36.665640  638584 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 10:21:36.665711  638584 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 10:21:36.665755  638584 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 10:21:36.665836  638584 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 10:21:36.665906  638584 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 10:21:36.665989  638584 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 10:21:36.666061  638584 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 10:21:36.666164  638584 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 10:21:36.666287  638584 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 10:21:36.666443  638584 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 10:21:36.666505  638584 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 10:21:36.668101  638584 out.go:252]   - Generating certificates and keys ...
	I1025 10:21:36.668178  638584 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 10:21:36.668239  638584 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 10:21:36.668297  638584 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 10:21:36.668408  638584 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 10:21:36.668487  638584 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 10:21:36.668570  638584 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 10:21:36.668632  638584 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 10:21:36.669282  638584 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-683681 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1025 10:21:36.669368  638584 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 10:21:36.669522  638584 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-683681 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1025 10:21:36.669602  638584 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 10:21:36.669681  638584 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 10:21:36.669732  638584 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 10:21:36.669795  638584 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 10:21:36.669856  638584 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 10:21:36.669922  638584 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 10:21:36.669975  638584 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 10:21:36.670054  638584 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 10:21:36.670110  638584 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 10:21:36.670198  638584 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 10:21:36.670268  638584 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 10:21:36.673336  638584 out.go:252]   - Booting up control plane ...
	I1025 10:21:36.673471  638584 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 10:21:36.673585  638584 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 10:21:36.673666  638584 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 10:21:36.673811  638584 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 10:21:36.673918  638584 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 10:21:36.674052  638584 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 10:21:36.674150  638584 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 10:21:36.674197  638584 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 10:21:36.674448  638584 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 10:21:36.674610  638584 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 10:21:36.674735  638584 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.921842ms
	I1025 10:21:36.674869  638584 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 10:21:36.674985  638584 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1025 10:21:36.675113  638584 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 10:21:36.675225  638584 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 10:21:36.675373  638584 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.848539291s
	I1025 10:21:36.675485  638584 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.099917517s
	I1025 10:21:36.675576  638584 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.501482903s
	I1025 10:21:36.675749  638584 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 10:21:36.675902  638584 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 10:21:36.675992  638584 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 10:21:36.676186  638584 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-683681 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 10:21:36.676270  638584 kubeadm.go:318] [bootstrap-token] Using token: gh3e3n.vi8ppuvnf3ix9l58
	I1025 10:21:36.678455  638584 out.go:252]   - Configuring RBAC rules ...
	I1025 10:21:36.678655  638584 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 10:21:36.678741  638584 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 10:21:36.678915  638584 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 10:21:36.679094  638584 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 10:21:36.679206  638584 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 10:21:36.679286  638584 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 10:21:36.679483  638584 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 10:21:36.679551  638584 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 10:21:36.679620  638584 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 10:21:36.679632  638584 kubeadm.go:318] 
	I1025 10:21:36.679721  638584 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 10:21:36.679732  638584 kubeadm.go:318] 
	I1025 10:21:36.679835  638584 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 10:21:36.679845  638584 kubeadm.go:318] 
	I1025 10:21:36.679882  638584 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 10:21:36.679977  638584 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 10:21:36.680061  638584 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 10:21:36.680070  638584 kubeadm.go:318] 
	I1025 10:21:36.680154  638584 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 10:21:36.680170  638584 kubeadm.go:318] 
	I1025 10:21:36.680221  638584 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 10:21:36.680229  638584 kubeadm.go:318] 
	I1025 10:21:36.680289  638584 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 10:21:36.680387  638584 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 10:21:36.680463  638584 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 10:21:36.680471  638584 kubeadm.go:318] 
	I1025 10:21:36.680563  638584 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 10:21:36.680661  638584 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 10:21:36.680670  638584 kubeadm.go:318] 
	I1025 10:21:36.680776  638584 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token gh3e3n.vi8ppuvnf3ix9l58 \
	I1025 10:21:36.680932  638584 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d220a0fdd8ffa4188e5a3a4b5b37c16072a735a52e5507fcdbcb4d38c461642f \
	I1025 10:21:36.680959  638584 kubeadm.go:318] 	--control-plane 
	I1025 10:21:36.680967  638584 kubeadm.go:318] 
	I1025 10:21:36.681062  638584 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 10:21:36.681073  638584 kubeadm.go:318] 
	I1025 10:21:36.681190  638584 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token gh3e3n.vi8ppuvnf3ix9l58 \
	I1025 10:21:36.681350  638584 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d220a0fdd8ffa4188e5a3a4b5b37c16072a735a52e5507fcdbcb4d38c461642f 
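
The --discovery-token-ca-cert-hash value printed above is kubeadm's public-key pin for the cluster CA: a SHA-256 digest of the CA certificate's Subject Public Key Info, hex-encoded behind a "sha256:" prefix. A self-contained Go sketch that recomputes it from the CA copied earlier in this run (path from the scp lines above):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm pins the SPKI (public key), not the whole certificate,
	// so the hash survives CA re-issuance with the same key.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
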
	I1025 10:21:36.681383  638584 cni.go:84] Creating CNI manager for ""
	I1025 10:21:36.681402  638584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:21:36.685048  638584 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1025 10:21:34.332728  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:36.832195  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:34.756305  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:37.255124  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:36.686372  638584 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 10:21:36.691990  638584 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 10:21:36.692012  638584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 10:21:36.711248  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 10:21:36.950001  638584 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 10:21:36.950063  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:36.950140  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-683681 minikube.k8s.io/updated_at=2025_10_25T10_21_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689 minikube.k8s.io/name=embed-certs-683681 minikube.k8s.io/primary=true
	I1025 10:21:36.962716  638584 ops.go:34] apiserver oom_adj: -16
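
The "apiserver oom_adj: -16" reading comes from the probe a few lines up, cat /proc/$(pgrep kube-apiserver)/oom_adj, the legacy procfs knob that biases the kernel OOM killer away from the apiserver. A rough Go equivalent of the same probe (assumes a single kube-apiserver process, as on this node):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
		os.Exit(1)
	}
	// Take the first PID, as the shell's $(pgrep ...) effectively does
	// when exactly one apiserver is running.
	pid := strings.Fields(string(out))[0]
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}
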
	I1025 10:21:37.040626  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:37.541457  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:38.041452  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:38.541265  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:39.041583  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:39.541553  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:40.041803  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:39.330926  631515 pod_ready.go:94] pod "coredns-66bc5c9577-gtnvx" is "Ready"
	I1025 10:21:39.330956  631515 pod_ready.go:86] duration metric: took 38.506063732s for pod "coredns-66bc5c9577-gtnvx" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.333923  631515 pod_ready.go:83] waiting for pod "etcd-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.338091  631515 pod_ready.go:94] pod "etcd-no-preload-899665" is "Ready"
	I1025 10:21:39.338119  631515 pod_ready.go:86] duration metric: took 4.169551ms for pod "etcd-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.340510  631515 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.344782  631515 pod_ready.go:94] pod "kube-apiserver-no-preload-899665" is "Ready"
	I1025 10:21:39.344808  631515 pod_ready.go:86] duration metric: took 4.267435ms for pod "kube-apiserver-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.346928  631515 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.527867  631515 pod_ready.go:94] pod "kube-controller-manager-no-preload-899665" is "Ready"
	I1025 10:21:39.527898  631515 pod_ready.go:86] duration metric: took 180.948376ms for pod "kube-controller-manager-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.728099  631515 pod_ready.go:83] waiting for pod "kube-proxy-fdthr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:40.129442  631515 pod_ready.go:94] pod "kube-proxy-fdthr" is "Ready"
	I1025 10:21:40.129471  631515 pod_ready.go:86] duration metric: took 401.343438ms for pod "kube-proxy-fdthr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:40.329196  631515 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:40.728428  631515 pod_ready.go:94] pod "kube-scheduler-no-preload-899665" is "Ready"
	I1025 10:21:40.728461  631515 pod_ready.go:86] duration metric: took 399.238728ms for pod "kube-scheduler-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:40.728477  631515 pod_ready.go:40] duration metric: took 39.908384057s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:40.776763  631515 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 10:21:40.778765  631515 out.go:179] * Done! kubectl is now configured to use "no-preload-899665" cluster and "default" namespace by default
	I1025 10:21:40.541552  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:41.041202  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:41.540928  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:41.626698  638584 kubeadm.go:1113] duration metric: took 4.676682024s to wait for elevateKubeSystemPrivileges
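
The half-second cadence of "kubectl get sa default" above is the elevateKubeSystemPrivileges wait: kubeadm creates the default service account asynchronously, so minikube polls until it exists (about 4.7s in this run) before treating the privilege elevation as complete. A minimal poll loop in the same spirit (kubeconfig path from the log; the one-minute timeout is an assumption, not minikube's value):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA runs `kubectl get sa default` every 500ms until it
// succeeds or the deadline passes, mirroring the retries logged above.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
		panic(err)
	}
}
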
	I1025 10:21:41.626740  638584 kubeadm.go:402] duration metric: took 15.779813606s to StartCluster
	I1025 10:21:41.626763  638584 settings.go:142] acquiring lock: {Name:mkccf1ae03faaf532633e370e89b56d0fc3d2b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:41.626844  638584 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:21:41.628485  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:41.628738  638584 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:21:41.628758  638584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 10:21:41.628815  638584 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:21:41.628922  638584 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-683681"
	I1025 10:21:41.628947  638584 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-683681"
	I1025 10:21:41.628984  638584 host.go:66] Checking if "embed-certs-683681" exists ...
	I1025 10:21:41.628970  638584 addons.go:69] Setting default-storageclass=true in profile "embed-certs-683681"
	I1025 10:21:41.629014  638584 config.go:182] Loaded profile config "embed-certs-683681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:41.629033  638584 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-683681"
	I1025 10:21:41.629466  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:41.629530  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:41.632478  638584 out.go:179] * Verifying Kubernetes components...
	I1025 10:21:41.635235  638584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:41.654284  638584 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:21:41.655720  638584 addons.go:238] Setting addon default-storageclass=true in "embed-certs-683681"
	I1025 10:21:41.655762  638584 host.go:66] Checking if "embed-certs-683681" exists ...
	I1025 10:21:41.656106  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:41.656203  638584 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:21:41.656228  638584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:21:41.656290  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:41.679823  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:41.684242  638584 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:21:41.684268  638584 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:21:41.684345  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:41.712034  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:41.726056  638584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 10:21:41.804301  638584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:21:41.809475  638584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:21:41.831472  638584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:21:41.912561  638584 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
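
The long sed pipeline above (the Run at 10:21:41.726056) rewrites the CoreDNS Corefile so that host.minikube.internal resolves to 192.168.94.1, the host's address on the cluster's docker network: it splices a hosts block ahead of the "forward . /etc/resolv.conf" directive, adds a "log" directive before "errors", and kubectl-replaces the ConfigMap. Reconstructed from the sed expression, the injected fragment is:

        hosts {
           192.168.94.1 host.minikube.internal
           fallthrough
        }

With fallthrough, every name other than host.minikube.internal keeps flowing to the forward plugin, so only the host alias is special-cased.
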
	I1025 10:21:42.139096  638584 node_ready.go:35] waiting up to 6m0s for node "embed-certs-683681" to be "Ready" ...
	I1025 10:21:42.145509  638584 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1025 10:21:39.755018  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:41.756413  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:42.146900  638584 addons.go:514] duration metric: took 518.085843ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 10:21:42.416647  638584 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-683681" context rescaled to 1 replicas
	W1025 10:21:44.142621  638584 node_ready.go:57] node "embed-certs-683681" has "Ready":"False" status (will retry)
	W1025 10:21:44.256001  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:46.755543  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:47.755253  636484 pod_ready.go:94] pod "coredns-66bc5c9577-rznxv" is "Ready"
	I1025 10:21:47.755285  636484 pod_ready.go:86] duration metric: took 31.006445495s for pod "coredns-66bc5c9577-rznxv" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.758305  636484 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.763202  636484 pod_ready.go:94] pod "etcd-default-k8s-diff-port-767846" is "Ready"
	I1025 10:21:47.763230  636484 pod_ready.go:86] duration metric: took 4.871359ms for pod "etcd-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.765533  636484 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.769981  636484 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-767846" is "Ready"
	I1025 10:21:47.770085  636484 pod_ready.go:86] duration metric: took 4.518205ms for pod "kube-apiserver-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.772484  636484 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.952605  636484 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-767846" is "Ready"
	I1025 10:21:47.952636  636484 pod_ready.go:86] duration metric: took 180.129601ms for pod "kube-controller-manager-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:48.153608  636484 pod_ready.go:83] waiting for pod "kube-proxy-cvm5c" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:48.552560  636484 pod_ready.go:94] pod "kube-proxy-cvm5c" is "Ready"
	I1025 10:21:48.552591  636484 pod_ready.go:86] duration metric: took 398.954024ms for pod "kube-proxy-cvm5c" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:48.753044  636484 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:49.152785  636484 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-767846" is "Ready"
	I1025 10:21:49.152816  636484 pod_ready.go:86] duration metric: took 399.744601ms for pod "kube-scheduler-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:49.152828  636484 pod_ready.go:40] duration metric: took 32.410721068s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:49.201278  636484 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 10:21:49.203247  636484 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-767846" cluster and "default" namespace by default
	W1025 10:21:46.143197  638584 node_ready.go:57] node "embed-certs-683681" has "Ready":"False" status (will retry)
	W1025 10:21:48.642439  638584 node_ready.go:57] node "embed-certs-683681" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 25 10:21:10 no-preload-899665 crio[560]: time="2025-10-25T10:21:10.327883138Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:21:10 no-preload-899665 crio[560]: time="2025-10-25T10:21:10.529778478Z" level=info msg="Removing container: 99258514298e27b07b8a53db94e30c375ba94bdec5b4c3c6ff8fb28e14743750" id=bd4e7e36-1401-470c-bd3e-d9a0d24141d3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:21:10 no-preload-899665 crio[560]: time="2025-10-25T10:21:10.54060566Z" level=info msg="Removed container 99258514298e27b07b8a53db94e30c375ba94bdec5b4c3c6ff8fb28e14743750: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9/dashboard-metrics-scraper" id=bd4e7e36-1401-470c-bd3e-d9a0d24141d3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:21:20 no-preload-899665 crio[560]: time="2025-10-25T10:21:20.294152639Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f21de032-549d-49b4-b27d-197453a80201 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:21:20 no-preload-899665 crio[560]: time="2025-10-25T10:21:20.297394337Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2b6e52d1-99a8-4d44-bf4c-f939314c614b name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:21:20 no-preload-899665 crio[560]: time="2025-10-25T10:21:20.301211088Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9/dashboard-metrics-scraper" id=b2167c94-a25b-49c6-b4e5-00385a084fb2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:21:20 no-preload-899665 crio[560]: time="2025-10-25T10:21:20.301476354Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:20 no-preload-899665 crio[560]: time="2025-10-25T10:21:20.309178056Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:20 no-preload-899665 crio[560]: time="2025-10-25T10:21:20.309745542Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:20 no-preload-899665 crio[560]: time="2025-10-25T10:21:20.347517343Z" level=info msg="Created container 1d647509766f8d83b8f1b6c648758ecc56de325140d4ff1e14dee9c0449bffbb: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9/dashboard-metrics-scraper" id=b2167c94-a25b-49c6-b4e5-00385a084fb2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:21:20 no-preload-899665 crio[560]: time="2025-10-25T10:21:20.348262977Z" level=info msg="Starting container: 1d647509766f8d83b8f1b6c648758ecc56de325140d4ff1e14dee9c0449bffbb" id=27e6b0eb-5e5a-4ec9-93b1-8729278f4b47 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:21:20 no-preload-899665 crio[560]: time="2025-10-25T10:21:20.350690333Z" level=info msg="Started container" PID=1741 containerID=1d647509766f8d83b8f1b6c648758ecc56de325140d4ff1e14dee9c0449bffbb description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9/dashboard-metrics-scraper id=27e6b0eb-5e5a-4ec9-93b1-8729278f4b47 name=/runtime.v1.RuntimeService/StartContainer sandboxID=37572d825b67e73f53a655283b972712e6ae4e28f13f80347070ddc4faf94677
	Oct 25 10:21:20 no-preload-899665 crio[560]: time="2025-10-25T10:21:20.564548027Z" level=info msg="Removing container: 9154c42e2b9fb263fd3a632e65db5c99f3e7b9406d6433a4fca9383898cc09c7" id=7ea8b6ca-28a7-4530-9a66-8478a92e31ef name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:21:20 no-preload-899665 crio[560]: time="2025-10-25T10:21:20.578649748Z" level=info msg="Removed container 9154c42e2b9fb263fd3a632e65db5c99f3e7b9406d6433a4fca9383898cc09c7: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9/dashboard-metrics-scraper" id=7ea8b6ca-28a7-4530-9a66-8478a92e31ef name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:21:45 no-preload-899665 crio[560]: time="2025-10-25T10:21:45.42774688Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6d5073b4-d37f-476a-883c-896562180d28 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:21:45 no-preload-899665 crio[560]: time="2025-10-25T10:21:45.428841492Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7ccf43fb-0f4f-4231-b881-1d87b0531b8b name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:21:45 no-preload-899665 crio[560]: time="2025-10-25T10:21:45.430296013Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9/dashboard-metrics-scraper" id=b3d8d9fc-a54f-4b4d-8cf0-8baf9376fd28 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:21:45 no-preload-899665 crio[560]: time="2025-10-25T10:21:45.430479054Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:45 no-preload-899665 crio[560]: time="2025-10-25T10:21:45.436184347Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:45 no-preload-899665 crio[560]: time="2025-10-25T10:21:45.436719415Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:45 no-preload-899665 crio[560]: time="2025-10-25T10:21:45.473714177Z" level=info msg="Created container 8cfca56338f81739721a8fc6791605752dfe0bc05037803fa23ac142fec9a9e6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9/dashboard-metrics-scraper" id=b3d8d9fc-a54f-4b4d-8cf0-8baf9376fd28 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:21:45 no-preload-899665 crio[560]: time="2025-10-25T10:21:45.47442335Z" level=info msg="Starting container: 8cfca56338f81739721a8fc6791605752dfe0bc05037803fa23ac142fec9a9e6" id=f2210c78-b906-41b9-a5a4-992938f38e75 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:21:45 no-preload-899665 crio[560]: time="2025-10-25T10:21:45.476853344Z" level=info msg="Started container" PID=1773 containerID=8cfca56338f81739721a8fc6791605752dfe0bc05037803fa23ac142fec9a9e6 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9/dashboard-metrics-scraper id=f2210c78-b906-41b9-a5a4-992938f38e75 name=/runtime.v1.RuntimeService/StartContainer sandboxID=37572d825b67e73f53a655283b972712e6ae4e28f13f80347070ddc4faf94677
	Oct 25 10:21:45 no-preload-899665 crio[560]: time="2025-10-25T10:21:45.639251067Z" level=info msg="Removing container: 1d647509766f8d83b8f1b6c648758ecc56de325140d4ff1e14dee9c0449bffbb" id=820f914a-612d-4f87-bfe7-55355ff8e9f6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:21:45 no-preload-899665 crio[560]: time="2025-10-25T10:21:45.65243796Z" level=info msg="Removed container 1d647509766f8d83b8f1b6c648758ecc56de325140d4ff1e14dee9c0449bffbb: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9/dashboard-metrics-scraper" id=820f914a-612d-4f87-bfe7-55355ff8e9f6 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	8cfca56338f81       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   3                   37572d825b67e       dashboard-metrics-scraper-6ffb444bf9-8krs9   kubernetes-dashboard
	6dcccb2cdcdf4       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago      Running             kubernetes-dashboard        0                   5a24a3c930837       kubernetes-dashboard-855c9754f9-6zv5c        kubernetes-dashboard
	e435fa14f2cce       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Running             storage-provisioner         1                   c567cda8d1f34       storage-provisioner                          kube-system
	22cccd3b8325d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   7d0e0eb7eb5f5       coredns-66bc5c9577-gtnvx                     kube-system
	21c1a2e862038       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   66d7c45959f2b       busybox                                      default
	7aa07387b3dad       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   fa1a5ad8c2df9       kindnet-sjskf                                kube-system
	6c060dfbf2e50       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   c567cda8d1f34       storage-provisioner                          kube-system
	059ea673d4650       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           55 seconds ago      Running             kube-proxy                  0                   4ab1bdc0f77e3       kube-proxy-fdthr                             kube-system
	5120b28e61a32       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           58 seconds ago      Running             kube-apiserver              0                   674bba36c7a2f       kube-apiserver-no-preload-899665             kube-system
	352d3fd34e0c2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           58 seconds ago      Running             kube-scheduler              0                   99f80224a8fc9       kube-scheduler-no-preload-899665             kube-system
	b199511be2bb2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           58 seconds ago      Running             kube-controller-manager     0                   75103cdc9d767       kube-controller-manager-no-preload-899665    kube-system
	f94925c7a0544       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           58 seconds ago      Running             etcd                        0                   77db653658186       etcd-no-preload-899665                       kube-system
	
	
	==> coredns [22cccd3b8325d38064ff3cf5dec75ac34e8ea0682f221af167776ca55146f3d7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49658 - 59182 "HINFO IN 1381080871278460682.8429651544703439985. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.070735667s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-899665
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-899665
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=no-preload-899665
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_19_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:19:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-899665
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:21:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:21:50 +0000   Sat, 25 Oct 2025 10:19:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:21:50 +0000   Sat, 25 Oct 2025 10:19:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:21:50 +0000   Sat, 25 Oct 2025 10:19:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:21:50 +0000   Sat, 25 Oct 2025 10:20:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-899665
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                9552a4c0-ffdc-4517-8db3-fa4623099c2a
	  Boot ID:                    41de8bfa-0cc1-441f-80d4-c56fb8371229
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-gtnvx                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     112s
	  kube-system                 etcd-no-preload-899665                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-sjskf                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-no-preload-899665              250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-no-preload-899665     200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-fdthr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-no-preload-899665              100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-8krs9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-6zv5c         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 111s                 kube-proxy       
	  Normal  Starting                 55s                  kube-proxy       
	  Normal  Starting                 2m4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m4s)  kubelet          Node no-preload-899665 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m4s)  kubelet          Node no-preload-899665 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x8 over 2m4s)  kubelet          Node no-preload-899665 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node no-preload-899665 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node no-preload-899665 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node no-preload-899665 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           113s                 node-controller  Node no-preload-899665 event: Registered Node no-preload-899665 in Controller
	  Normal  NodeReady                98s                  kubelet          Node no-preload-899665 status is now: NodeReady
	  Normal  Starting                 59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)    kubelet          Node no-preload-899665 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)    kubelet          Node no-preload-899665 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)    kubelet          Node no-preload-899665 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                  node-controller  Node no-preload-899665 event: Registered Node no-preload-899665 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[Oct25 10:18] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 3d 4d bf 49 5d 08 06
	[  +0.000365] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fe 72 b8 ab d2 81 08 06
	[ +29.291338] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 96 23 11 37 e3 00 08 06
	[  +0.000335] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[ +21.527050] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 89 98 95 1f c3 08 06
	[  +0.000689] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[Oct25 10:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[  +9.472150] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
	[  +6.585715] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ce 90 e9 36 a0 95 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[ +15.111475] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 5e 04 d2 54 0d 08 06
	[  +0.000467] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
	
	
	==> etcd [f94925c7a05442fb6214b27d55f74ec54efa54bb994038837f4ee6aec190c793] <==
	{"level":"warn","ts":"2025-10-25T10:20:58.312532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.322086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.333470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.346205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.352028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.360869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.369766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.378975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.387828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.403425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.409905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.417677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.425593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.433890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.442014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.450749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.458481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.466370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.474353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.484491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.499778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.508613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.517579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.577225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34978","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T10:21:08.820530Z","caller":"traceutil/trace.go:172","msg":"trace[196694558] transaction","detail":"{read_only:false; response_revision:622; number_of_response:1; }","duration":"213.714466ms","start":"2025-10-25T10:21:08.606793Z","end":"2025-10-25T10:21:08.820508Z","steps":["trace[196694558] 'process raft request'  (duration: 126.577035ms)","trace[196694558] 'compare'  (duration: 87.039951ms)"],"step_count":2}
	
	
	==> kernel <==
	 10:21:55 up  2:04,  0 user,  load average: 5.48, 5.16, 5.97
	Linux no-preload-899665 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7aa07387b3dadb428f650a505ba419b3a80a74e2038ef9adb6684c94298a0ca5] <==
	I1025 10:21:00.000938       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:21:00.001205       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 10:21:00.001408       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:21:00.001431       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:21:00.001471       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:21:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:21:00.302283       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:21:00.302372       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:21:00.302386       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:21:00.400112       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 10:21:00.602603       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:21:00.602654       1 metrics.go:72] Registering metrics
	I1025 10:21:00.602733       1 controller.go:711] "Syncing nftables rules"
	I1025 10:21:10.300482       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:21:10.300559       1 main.go:301] handling current node
	I1025 10:21:20.307439       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:21:20.307474       1 main.go:301] handling current node
	I1025 10:21:30.300470       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:21:30.300545       1 main.go:301] handling current node
	I1025 10:21:40.300734       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:21:40.300777       1 main.go:301] handling current node
	I1025 10:21:50.302346       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:21:50.302406       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5120b28e61a325e39f449795f46e9d4332fe4fe8d721f0cb753fff3aeddf5964] <==
	I1025 10:20:59.148185       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:20:59.148192       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:20:59.148986       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 10:20:59.149249       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 10:20:59.149308       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 10:20:59.149375       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 10:20:59.151405       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 10:20:59.156394       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 10:20:59.157981       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 10:20:59.158071       1 policy_source.go:240] refreshing policies
	E1025 10:20:59.162930       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 10:20:59.204115       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:20:59.242201       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:20:59.434239       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:20:59.532099       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:20:59.566674       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:20:59.588213       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:20:59.606237       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:20:59.664347       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.116.208"}
	I1025 10:20:59.676119       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.147.92"}
	I1025 10:21:00.052893       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:21:02.070551       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:21:02.368240       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:21:02.468459       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b199511be2bb272a9b6fcefc2c7f2d0cc2c364bcb33d5762b0f79b58442e445a] <==
	I1025 10:21:01.913902       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-899665"
	I1025 10:21:01.913962       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 10:21:01.915109       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 10:21:01.915117       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:21:01.915262       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:21:01.915491       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 10:21:01.915508       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 10:21:01.915527       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 10:21:01.915661       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 10:21:01.915732       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 10:21:01.915703       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 10:21:01.915943       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 10:21:01.917366       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 10:21:01.922002       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:21:01.922024       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:21:01.922034       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:21:01.922000       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 10:21:01.923134       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:21:01.924259       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 10:21:01.925488       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:21:01.926802       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 10:21:01.938148       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 10:21:01.939384       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 10:21:01.939400       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 10:21:01.944799       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [059ea673d4650d6e7e9628b8a7cf58c09fb38646edaba28e0ed69edba66a5ad8] <==
	I1025 10:20:59.819138       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:20:59.895465       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:20:59.996550       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:20:59.996601       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 10:20:59.996682       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:21:00.017955       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:21:00.018043       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:21:00.024614       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:21:00.025076       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:21:00.025110       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:21:00.026965       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:21:00.026988       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:21:00.027108       1 config.go:309] "Starting node config controller"
	I1025 10:21:00.027116       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:21:00.027382       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:21:00.027419       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:21:00.027832       1 config.go:200] "Starting service config controller"
	I1025 10:21:00.028488       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:21:00.127192       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:21:00.127212       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 10:21:00.127789       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:21:00.128918       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [352d3fd34e0c2d541fcf1e1a74e6466f8d1c2eeb5794c69f26b05784aa993d7f] <==
	I1025 10:20:57.507887       1 serving.go:386] Generated self-signed cert in-memory
	W1025 10:20:59.094808       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 10:20:59.094932       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 10:20:59.094949       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 10:20:59.094966       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 10:20:59.155691       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:20:59.155735       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:20:59.159110       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:20:59.159205       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:20:59.159209       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 10:20:59.159057       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:20:59.260410       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:21:02 no-preload-899665 kubelet[703]: I1025 10:21:02.551781     703 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1c1c50ff-70c9-457a-a5e5-dd294a77f730-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-6zv5c\" (UID: \"1c1c50ff-70c9-457a-a5e5-dd294a77f730\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6zv5c"
	Oct 25 10:21:07 no-preload-899665 kubelet[703]: I1025 10:21:07.695408     703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6zv5c" podStartSLOduration=1.714452143 podStartE2EDuration="5.695383288s" podCreationTimestamp="2025-10-25 10:21:02 +0000 UTC" firstStartedPulling="2025-10-25 10:21:02.772614366 +0000 UTC m=+6.529933892" lastFinishedPulling="2025-10-25 10:21:06.753545511 +0000 UTC m=+10.510865037" observedRunningTime="2025-10-25 10:21:07.531669078 +0000 UTC m=+11.288988611" watchObservedRunningTime="2025-10-25 10:21:07.695383288 +0000 UTC m=+11.452702823"
	Oct 25 10:21:08 no-preload-899665 kubelet[703]: I1025 10:21:08.963071     703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 25 10:21:09 no-preload-899665 kubelet[703]: I1025 10:21:09.521743     703 scope.go:117] "RemoveContainer" containerID="99258514298e27b07b8a53db94e30c375ba94bdec5b4c3c6ff8fb28e14743750"
	Oct 25 10:21:10 no-preload-899665 kubelet[703]: I1025 10:21:10.527493     703 scope.go:117] "RemoveContainer" containerID="99258514298e27b07b8a53db94e30c375ba94bdec5b4c3c6ff8fb28e14743750"
	Oct 25 10:21:10 no-preload-899665 kubelet[703]: I1025 10:21:10.527538     703 scope.go:117] "RemoveContainer" containerID="9154c42e2b9fb263fd3a632e65db5c99f3e7b9406d6433a4fca9383898cc09c7"
	Oct 25 10:21:10 no-preload-899665 kubelet[703]: E1025 10:21:10.527696     703 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8krs9_kubernetes-dashboard(6682609d-acec-4445-8e7c-e544d9877ae8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9" podUID="6682609d-acec-4445-8e7c-e544d9877ae8"
	Oct 25 10:21:11 no-preload-899665 kubelet[703]: I1025 10:21:11.532586     703 scope.go:117] "RemoveContainer" containerID="9154c42e2b9fb263fd3a632e65db5c99f3e7b9406d6433a4fca9383898cc09c7"
	Oct 25 10:21:11 no-preload-899665 kubelet[703]: E1025 10:21:11.532830     703 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8krs9_kubernetes-dashboard(6682609d-acec-4445-8e7c-e544d9877ae8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9" podUID="6682609d-acec-4445-8e7c-e544d9877ae8"
	Oct 25 10:21:20 no-preload-899665 kubelet[703]: I1025 10:21:20.293508     703 scope.go:117] "RemoveContainer" containerID="9154c42e2b9fb263fd3a632e65db5c99f3e7b9406d6433a4fca9383898cc09c7"
	Oct 25 10:21:20 no-preload-899665 kubelet[703]: I1025 10:21:20.562596     703 scope.go:117] "RemoveContainer" containerID="9154c42e2b9fb263fd3a632e65db5c99f3e7b9406d6433a4fca9383898cc09c7"
	Oct 25 10:21:20 no-preload-899665 kubelet[703]: I1025 10:21:20.562827     703 scope.go:117] "RemoveContainer" containerID="1d647509766f8d83b8f1b6c648758ecc56de325140d4ff1e14dee9c0449bffbb"
	Oct 25 10:21:20 no-preload-899665 kubelet[703]: E1025 10:21:20.563037     703 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8krs9_kubernetes-dashboard(6682609d-acec-4445-8e7c-e544d9877ae8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9" podUID="6682609d-acec-4445-8e7c-e544d9877ae8"
	Oct 25 10:21:30 no-preload-899665 kubelet[703]: I1025 10:21:30.293426     703 scope.go:117] "RemoveContainer" containerID="1d647509766f8d83b8f1b6c648758ecc56de325140d4ff1e14dee9c0449bffbb"
	Oct 25 10:21:30 no-preload-899665 kubelet[703]: E1025 10:21:30.293636     703 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8krs9_kubernetes-dashboard(6682609d-acec-4445-8e7c-e544d9877ae8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9" podUID="6682609d-acec-4445-8e7c-e544d9877ae8"
	Oct 25 10:21:45 no-preload-899665 kubelet[703]: I1025 10:21:45.427135     703 scope.go:117] "RemoveContainer" containerID="1d647509766f8d83b8f1b6c648758ecc56de325140d4ff1e14dee9c0449bffbb"
	Oct 25 10:21:45 no-preload-899665 kubelet[703]: I1025 10:21:45.637917     703 scope.go:117] "RemoveContainer" containerID="1d647509766f8d83b8f1b6c648758ecc56de325140d4ff1e14dee9c0449bffbb"
	Oct 25 10:21:45 no-preload-899665 kubelet[703]: I1025 10:21:45.638147     703 scope.go:117] "RemoveContainer" containerID="8cfca56338f81739721a8fc6791605752dfe0bc05037803fa23ac142fec9a9e6"
	Oct 25 10:21:45 no-preload-899665 kubelet[703]: E1025 10:21:45.638400     703 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8krs9_kubernetes-dashboard(6682609d-acec-4445-8e7c-e544d9877ae8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9" podUID="6682609d-acec-4445-8e7c-e544d9877ae8"
	Oct 25 10:21:50 no-preload-899665 kubelet[703]: I1025 10:21:50.293855     703 scope.go:117] "RemoveContainer" containerID="8cfca56338f81739721a8fc6791605752dfe0bc05037803fa23ac142fec9a9e6"
	Oct 25 10:21:50 no-preload-899665 kubelet[703]: E1025 10:21:50.294042     703 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8krs9_kubernetes-dashboard(6682609d-acec-4445-8e7c-e544d9877ae8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9" podUID="6682609d-acec-4445-8e7c-e544d9877ae8"
	Oct 25 10:21:53 no-preload-899665 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:21:53 no-preload-899665 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:21:53 no-preload-899665 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 10:21:53 no-preload-899665 systemd[1]: kubelet.service: Consumed 2.022s CPU time.
	
	
	==> kubernetes-dashboard [6dcccb2cdcdf4276c8b975282d608c7438084301444b6d594bdeb6eb819546b9] <==
	2025/10/25 10:21:06 Using namespace: kubernetes-dashboard
	2025/10/25 10:21:06 Using in-cluster config to connect to apiserver
	2025/10/25 10:21:06 Using secret token for csrf signing
	2025/10/25 10:21:06 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 10:21:06 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 10:21:06 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 10:21:06 Generating JWE encryption key
	2025/10/25 10:21:06 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 10:21:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 10:21:07 Initializing JWE encryption key from synchronized object
	2025/10/25 10:21:07 Creating in-cluster Sidecar client
	2025/10/25 10:21:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:21:07 Serving insecurely on HTTP port: 9090
	2025/10/25 10:21:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:21:06 Starting overwatch
	
	
	==> storage-provisioner [6c060dfbf2e501de983eb8ec105f8a398270827cd89f6a0aa1efc2893da367a6] <==
	I1025 10:20:59.773876       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 10:20:59.777696       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [e435fa14f2cceba2eb3f8f15eb6412ef2454dbc3812f08964c402cf1e6522851] <==
	W1025 10:21:30.029843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:32.034332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:32.041353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:34.045263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:34.050461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:36.054145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:36.058745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:38.061994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:38.075559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:40.079232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:40.084736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:42.089007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:42.094080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:44.097107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:44.102291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:46.105426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:46.109573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:48.113299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:48.118540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:50.122468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:50.126560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:52.130681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:52.136425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:54.140217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:54.144383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-899665 -n no-preload-899665
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-899665 -n no-preload-899665: exit status 2 (366.691101ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-899665 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
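To replay this post-mortem by hand, the same invocations the harness records can be run directly against the still-running profile (a sketch; the binary path, profile name, and flags are taken verbatim from the commands logged elsewhere in this report):

	out/minikube-linux-amd64 pause -p no-preload-899665 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-899665 -n no-preload-899665
	kubectl --context no-preload-899665 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running

The pause invocation is the step this group's serial/Pause test exercises; the status and kubectl calls mirror the probes run in the post-mortem above.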
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-899665
helpers_test.go:243: (dbg) docker inspect no-preload-899665:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "695e74f3d798876c79ec9dbb6df46faeaba6433cb664a7ed6875cdc91796b192",
	        "Created": "2025-10-25T10:19:22.595874496Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 631836,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:20:49.225910051Z",
	            "FinishedAt": "2025-10-25T10:20:47.814484127Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/695e74f3d798876c79ec9dbb6df46faeaba6433cb664a7ed6875cdc91796b192/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/695e74f3d798876c79ec9dbb6df46faeaba6433cb664a7ed6875cdc91796b192/hostname",
	        "HostsPath": "/var/lib/docker/containers/695e74f3d798876c79ec9dbb6df46faeaba6433cb664a7ed6875cdc91796b192/hosts",
	        "LogPath": "/var/lib/docker/containers/695e74f3d798876c79ec9dbb6df46faeaba6433cb664a7ed6875cdc91796b192/695e74f3d798876c79ec9dbb6df46faeaba6433cb664a7ed6875cdc91796b192-json.log",
	        "Name": "/no-preload-899665",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-899665:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-899665",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "695e74f3d798876c79ec9dbb6df46faeaba6433cb664a7ed6875cdc91796b192",
	                "LowerDir": "/var/lib/docker/overlay2/8b682c6b2402b5b71231c37bbc02e0297cfeac2f648531c88d56a37d472a144a-init/diff:/var/lib/docker/overlay2/9d1960ec6ab151df7efbd4d43f9ccd1aaf4e0bd9e7db4285118644a6d5a54279/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8b682c6b2402b5b71231c37bbc02e0297cfeac2f648531c88d56a37d472a144a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8b682c6b2402b5b71231c37bbc02e0297cfeac2f648531c88d56a37d472a144a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8b682c6b2402b5b71231c37bbc02e0297cfeac2f648531c88d56a37d472a144a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-899665",
	                "Source": "/var/lib/docker/volumes/no-preload-899665/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-899665",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-899665",
	                "name.minikube.sigs.k8s.io": "no-preload-899665",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0dcf80ce3569fcb39d59eab6b6cb6a86db49ea084b7a707e96d1bb72fcf2d633",
	            "SandboxKey": "/var/run/docker/netns/0dcf80ce3569",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-899665": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:b8:35:85:1c:0b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c8aca1f62a354ce1975d9d9ac93fc72b53c6dd0c4c9ae45ab02ef47d3a0fdf93",
	                    "EndpointID": "b6825e60c438126a3252881fbf02da758c698582c166bdadcab5e100b71e9e2b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-899665",
	                        "695e74f3d798"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
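Individual fields can be pulled from this inspect dump with a Go-template query instead of reading the full JSON. For example (a sketch using the container name from this report), the forwarded API server port:

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' no-preload-899665

Given the NetworkSettings above, this prints 33121, the host port on which the container's 8443/tcp API endpoint is published.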
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-899665 -n no-preload-899665
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-899665 -n no-preload-899665: exit status 2 (357.717118ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-899665 logs -n 25
E1025 10:21:56.768692  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/kindnet-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-899665 logs -n 25: (1.226041025s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-714798 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:21 UTC │
	│ addons  │ enable metrics-server -p no-preload-899665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ stop    │ -p no-preload-899665 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ addons  │ enable metrics-server -p newest-cni-667966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ stop    │ -p newest-cni-667966 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-767846 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-667966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p newest-cni-667966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ stop    │ -p default-k8s-diff-port-767846 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:21 UTC │
	│ addons  │ enable dashboard -p no-preload-899665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p no-preload-899665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:21 UTC │
	│ image   │ newest-cni-667966 image list --format=json                                                                                                                                                                                                    │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ pause   │ -p newest-cni-667966 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-767846 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ start   │ -p default-k8s-diff-port-767846 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p newest-cni-667966                                                                                                                                                                                                                          │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p newest-cni-667966                                                                                                                                                                                                                          │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p disable-driver-mounts-805899                                                                                                                                                                                                               │ disable-driver-mounts-805899 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ start   │ -p embed-certs-683681 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-683681           │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ image   │ old-k8s-version-714798 image list --format=json                                                                                                                                                                                               │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ pause   │ -p old-k8s-version-714798 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	│ delete  │ -p old-k8s-version-714798                                                                                                                                                                                                                     │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p old-k8s-version-714798                                                                                                                                                                                                                     │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ image   │ no-preload-899665 image list --format=json                                                                                                                                                                                                    │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ pause   │ -p no-preload-899665 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:21:10
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:21:10.148251  638584 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:21:10.148605  638584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:21:10.148630  638584 out.go:374] Setting ErrFile to fd 2...
	I1025 10:21:10.148638  638584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:21:10.148938  638584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 10:21:10.149711  638584 out.go:368] Setting JSON to false
	I1025 10:21:10.151634  638584 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7419,"bootTime":1761380251,"procs":447,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 10:21:10.151786  638584 start.go:141] virtualization: kvm guest
	I1025 10:21:10.154262  638584 out.go:179] * [embed-certs-683681] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 10:21:10.155881  638584 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:21:10.155931  638584 notify.go:220] Checking for updates...
	I1025 10:21:10.158857  638584 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:21:10.160458  638584 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:21:10.161966  638584 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	I1025 10:21:10.163444  638584 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 10:21:10.165074  638584 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:21:10.167201  638584 config.go:182] Loaded profile config "default-k8s-diff-port-767846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:10.167413  638584 config.go:182] Loaded profile config "no-preload-899665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:10.167543  638584 config.go:182] Loaded profile config "old-k8s-version-714798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:21:10.167677  638584 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:21:10.195271  638584 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 10:21:10.195411  638584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:21:10.276912  638584 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-25 10:21:10.253206883 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:21:10.277024  638584 docker.go:318] overlay module found
	I1025 10:21:10.278915  638584 out.go:179] * Using the docker driver based on user configuration
	I1025 10:21:10.280189  638584 start.go:305] selected driver: docker
	I1025 10:21:10.280210  638584 start.go:925] validating driver "docker" against <nil>
	I1025 10:21:10.280228  638584 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:21:10.280870  638584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:21:10.351945  638584 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-25 10:21:10.340512633 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
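The two `docker system info --format "{{json .}}"` runs above are how the driver health check gathers host facts (CPU count, memory, cgroup driver, server version). A minimal Go sketch of the same probe, for reference; this is an illustration, not minikube's actual cli_runner code:

	// Sketch: query the Docker daemon the same way the cli_runner lines above do.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// dockerInfo keeps only the fields this sketch cares about; the real
	// output carries the full struct dumped in the log lines above.
	type dockerInfo struct {
		NCPU          int    `json:"NCPU"`
		MemTotal      int64  `json:"MemTotal"`
		CgroupDriver  string `json:"CgroupDriver"`
		ServerVersion string `json:"ServerVersion"`
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("docker %s: %d CPUs, %d bytes RAM, cgroup driver %s\n",
			info.ServerVersion, info.NCPU, info.MemTotal, info.CgroupDriver)
	}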
	I1025 10:21:10.352169  638584 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 10:21:10.352450  638584 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:21:10.354600  638584 out.go:179] * Using Docker driver with root privileges
	I1025 10:21:10.356067  638584 cni.go:84] Creating CNI manager for ""
	I1025 10:21:10.356119  638584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:21:10.356128  638584 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 10:21:10.356206  638584 start.go:349] cluster config:
	{Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:21:10.359204  638584 out.go:179] * Starting "embed-certs-683681" primary control-plane node in "embed-certs-683681" cluster
	I1025 10:21:10.360475  638584 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:21:10.361884  638584 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:21:10.363223  638584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:10.363261  638584 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:21:10.363282  638584 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 10:21:10.363300  638584 cache.go:58] Caching tarball of preloaded images
	I1025 10:21:10.363426  638584 preload.go:233] Found /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 10:21:10.363440  638584 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:21:10.363573  638584 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/config.json ...
	I1025 10:21:10.363603  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/config.json: {Name:mk7d7cb38e92abe91e5617ae8c0cde69820d256b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:10.401470  638584 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:21:10.401501  638584 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:21:10.401524  638584 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:21:10.401557  638584 start.go:360] acquireMachinesLock for embed-certs-683681: {Name:mkb49d854e007783568583b216321c2ada753d14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:21:10.401681  638584 start.go:364] duration metric: took 100.361µs to acquireMachinesLock for "embed-certs-683681"
	I1025 10:21:10.401719  638584 start.go:93] Provisioning new machine with config: &{Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:21:10.401811  638584 start.go:125] createHost starting for "" (driver="docker")
	I1025 10:21:09.341512  636484 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:21:09.341546  636484 machine.go:96] duration metric: took 4.679953004s to provisionDockerMachine
	I1025 10:21:09.341561  636484 start.go:293] postStartSetup for "default-k8s-diff-port-767846" (driver="docker")
	I1025 10:21:09.341576  636484 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:21:09.341718  636484 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:21:09.341793  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.365110  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:09.484377  636484 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:21:09.489414  636484 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:21:09.489442  636484 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:21:09.489453  636484 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/addons for local assets ...
	I1025 10:21:09.489516  636484 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/files for local assets ...
	I1025 10:21:09.489612  636484 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem -> 3254552.pem in /etc/ssl/certs
	I1025 10:21:09.489735  636484 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:21:09.499262  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:09.521134  636484 start.go:296] duration metric: took 179.55364ms for postStartSetup
	I1025 10:21:09.521229  636484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:21:09.521289  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.546865  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:09.651523  636484 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:21:09.656840  636484 fix.go:56] duration metric: took 5.400890226s for fixHost
	I1025 10:21:09.656881  636484 start.go:83] releasing machines lock for "default-k8s-diff-port-767846", held for 5.400960044s
	I1025 10:21:09.656963  636484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-767846
	I1025 10:21:09.678291  636484 ssh_runner.go:195] Run: cat /version.json
	I1025 10:21:09.678335  636484 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:21:09.678385  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.678417  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.699727  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:09.699888  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
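Each sshutil line above opens an SSH client against the node's forwarded port (127.0.0.1:33123) as the docker user, authenticating with the machine's id_rsa key, and then runs commands such as `cat /etc/os-release`. A minimal sketch of that connection using golang.org/x/crypto/ssh; an illustration, not minikube's sshutil implementation:

	// Sketch: dial the node over SSH with key auth, as the sshutil lines do.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyPath := "/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa"
		keyBytes, err := os.ReadFile(keyPath)
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable on a throwaway test rig only
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33123", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("cat /etc/os-release")
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}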
	I1025 10:21:09.801273  636484 ssh_runner.go:195] Run: systemctl --version
	I1025 10:21:09.869861  636484 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:21:09.912691  636484 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:21:09.918693  636484 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:21:09.918789  636484 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:21:09.929691  636484 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:21:09.929723  636484 start.go:495] detecting cgroup driver to use...
	I1025 10:21:09.929768  636484 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 10:21:09.929846  636484 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:21:09.947292  636484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:21:09.962309  636484 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:21:09.962380  636484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:21:09.981742  636484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:21:09.997805  636484 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:21:10.091545  636484 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:21:10.191661  636484 docker.go:234] disabling docker service ...
	I1025 10:21:10.191739  636484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:21:10.211470  636484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:21:10.232902  636484 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:21:10.343594  636484 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:21:10.458272  636484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:21:10.475115  636484 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:21:10.492690  636484 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:21:10.492760  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.505848  636484 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 10:21:10.505908  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.517567  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.531478  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.545455  636484 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:21:10.557702  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.571143  636484 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.582240  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
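The run of `sed -i` commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to systemd, reset conmon_cgroup, and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A Go sketch of the first two substitutions, assuming direct file access rather than sed over SSH:

	// Sketch of the sed edits above: rewrite pause_image and cgroup_manager
	// in the cri-o drop-in. Illustrative only; minikube does this via sed.
	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
		if err := os.WriteFile(conf, data, 0o644); err != nil {
			panic(err)
		}
	}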
	I1025 10:21:10.593233  636484 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:21:10.602910  636484 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:21:10.612119  636484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:10.705561  636484 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:21:10.849205  636484 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:21:10.849299  636484 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:21:10.853987  636484 start.go:563] Will wait 60s for crictl version
	I1025 10:21:10.854061  636484 ssh_runner.go:195] Run: which crictl
	I1025 10:21:10.858281  636484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:21:10.891437  636484 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:21:10.891545  636484 ssh_runner.go:195] Run: crio --version
	I1025 10:21:10.928397  636484 ssh_runner.go:195] Run: crio --version
	I1025 10:21:10.968448  636484 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:21:10.969831  636484 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-767846 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:21:10.988308  636484 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1025 10:21:10.993548  636484 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
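The bash one-liner above is an idempotent hosts-file update: drop any existing host.minikube.internal mapping, append the current one, and copy the result back over /etc/hosts. The same pattern in Go, as a sketch:

	// Sketch of the /etc/hosts update above: strip any stale
	// host.minikube.internal line, then append the current mapping.
	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.103.1\thost.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Mirrors: grep -v $'\thost.minikube.internal$'
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}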
	I1025 10:21:11.007467  636484 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-767846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-767846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:21:11.007638  636484 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:11.007713  636484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:11.050081  636484 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:11.050104  636484 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:21:11.050159  636484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:11.079408  636484 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:11.079432  636484 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:21:11.079440  636484 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1025 10:21:11.079542  636484 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-767846 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-767846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:21:11.079604  636484 ssh_runner.go:195] Run: crio config
	I1025 10:21:11.135081  636484 cni.go:84] Creating CNI manager for ""
	I1025 10:21:11.135104  636484 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:21:11.135125  636484 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:21:11.135152  636484 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-767846 NodeName:default-k8s-diff-port-767846 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:21:11.135274  636484 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-767846"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
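	The kubeadm config above is rendered from the options struct logged at kubeadm.go:190 and shipped to the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp line below). A minimal text/template sketch of that kind of rendering; the struct and template here are illustrative, not minikube's real template:

	// Sketch: render a (heavily trimmed) kubeadm config from a struct.
	package main

	import (
		"os"
		"text/template"
	)

	// kubeadmParams is a hypothetical, cut-down parameter set.
	type kubeadmParams struct {
		ClusterName string
		PodSubnet   string
		K8sVersion  string
		BindPort    int
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	clusterName: {{.ClusterName}}
	kubernetesVersion: {{.K8sVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
	`

	func main() {
		p := kubeadmParams{ClusterName: "mk", PodSubnet: "10.244.0.0/16", K8sVersion: "v1.34.1", BindPort: 8444}
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}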
	
	I1025 10:21:11.135376  636484 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:21:11.146044  636484 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:21:11.146127  636484 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:21:11.157527  636484 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1025 10:21:11.173105  636484 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:21:11.194054  636484 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1025 10:21:11.210598  636484 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:21:11.215039  636484 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:21:11.228199  636484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:11.315547  636484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:21:11.344889  636484 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846 for IP: 192.168.103.2
	I1025 10:21:11.344914  636484 certs.go:195] generating shared ca certs ...
	I1025 10:21:11.344936  636484 certs.go:227] acquiring lock for ca certs: {Name:mke559c04eea9c265cb2a7beab0da125bee52db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:11.345096  636484 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key
	I1025 10:21:11.345147  636484 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key
	I1025 10:21:11.345159  636484 certs.go:257] generating profile certs ...
	I1025 10:21:11.345283  636484 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/client.key
	I1025 10:21:11.345382  636484 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/apiserver.key.0fbb729d
	I1025 10:21:11.345433  636484 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/proxy-client.key
	I1025 10:21:11.345576  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem (1338 bytes)
	W1025 10:21:11.345621  636484 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455_empty.pem, impossibly tiny 0 bytes
	I1025 10:21:11.345634  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:21:11.345661  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:21:11.345688  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:21:11.345716  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem (1679 bytes)
	I1025 10:21:11.345768  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:11.346665  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:21:11.371779  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 10:21:11.395674  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:21:11.420943  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:21:11.450225  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 10:21:11.471921  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:21:11.491964  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:21:11.513657  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:21:11.539802  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem --> /usr/share/ca-certificates/325455.pem (1338 bytes)
	I1025 10:21:11.564482  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /usr/share/ca-certificates/3254552.pem (1708 bytes)
	I1025 10:21:11.585472  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:21:11.605762  636484 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:21:11.620550  636484 ssh_runner.go:195] Run: openssl version
	I1025 10:21:11.628742  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/325455.pem && ln -fs /usr/share/ca-certificates/325455.pem /etc/ssl/certs/325455.pem"
	I1025 10:21:11.640494  636484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/325455.pem
	I1025 10:21:11.645456  636484 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:38 /usr/share/ca-certificates/325455.pem
	I1025 10:21:11.645535  636484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/325455.pem
	I1025 10:21:11.681821  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/325455.pem /etc/ssl/certs/51391683.0"
	I1025 10:21:11.692404  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3254552.pem && ln -fs /usr/share/ca-certificates/3254552.pem /etc/ssl/certs/3254552.pem"
	I1025 10:21:11.702722  636484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3254552.pem
	I1025 10:21:11.707367  636484 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:38 /usr/share/ca-certificates/3254552.pem
	I1025 10:21:11.707434  636484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3254552.pem
	I1025 10:21:11.744550  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3254552.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:21:11.754748  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:21:11.765670  636484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:11.770501  636484 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:32 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:11.770568  636484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:11.806437  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:21:11.816622  636484 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:21:11.821750  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:21:11.869084  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:21:11.918865  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:21:11.967891  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:21:12.023868  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:21:12.087958  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
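Each `openssl x509 -noout -checkend 86400` call above asserts that a control-plane certificate will not expire within the next 24 hours; only then are the existing certs reused rather than regenerated. The equivalent check in Go with crypto/x509, as a sketch:

	// Sketch of `openssl x509 -noout -in <cert> -checkend 86400`:
	// exit non-zero if the certificate expires within 24 hours.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least 24h")
	}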
	I1025 10:21:12.133903  636484 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-767846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-767846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:21:12.133995  636484 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:21:12.134057  636484 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:21:12.176249  636484 cri.go:89] found id: "5651b5355eb316ad91569abe8d79084a109bfb7f5e3317226217acc032d02de1"
	I1025 10:21:12.176277  636484 cri.go:89] found id: "4a3076ac0e1e7cab1ae1e3436bd70e3c3b3965b186f842a7e0c0d524505d0c57"
	I1025 10:21:12.176284  636484 cri.go:89] found id: "19816f19d39c5773a667353841a1802f9e8d4a9493ed76177e3cffba9eb45dd7"
	I1025 10:21:12.176289  636484 cri.go:89] found id: "93e7c0501a9a92272de292874e804fe8724d5cd8097e77aa3924e634b8f8d63b"
	I1025 10:21:12.176294  636484 cri.go:89] found id: ""
	I1025 10:21:12.176379  636484 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:21:12.191582  636484 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:21:12Z" level=error msg="open /run/runc: no such file or directory"
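The warning above shows the standard failure shape for remote commands: exit status plus the captured stdout and stderr (here runc's "open /run/runc: no such file or directory", which is expected when nothing is paused). A sketch of producing that shape with os/exec:

	// Sketch: run a command and report exit status plus captured
	// stdout/stderr, matching the log format of the runc failure above.
	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "runc", "list", "-f", "json")
		var stdout, stderr bytes.Buffer
		cmd.Stdout = &stdout
		cmd.Stderr = &stderr
		if err := cmd.Run(); err != nil {
			if exitErr, ok := err.(*exec.ExitError); ok {
				fmt.Printf("Process exited with status %d\nstdout:\n%s\nstderr:\n%s\n",
					exitErr.ExitCode(), stdout.String(), stderr.String())
				return
			}
			panic(err)
		}
		fmt.Print(stdout.String())
	}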
	I1025 10:21:12.191656  636484 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:21:12.201840  636484 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:21:12.201870  636484 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:21:12.201918  636484 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:21:12.211065  636484 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:21:12.211910  636484 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-767846" does not appear in /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:21:12.212424  636484 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-321838/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-767846" cluster setting kubeconfig missing "default-k8s-diff-port-767846" context setting]
	I1025 10:21:12.212991  636484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
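kubeconfig.go detects that the repaired profile is missing both its cluster and context entries and writes them back under a file lock. A sketch of that repair using client-go's clientcmd package, with the server URL taken from the node config above; an illustration, not minikube's kubeconfig.go:

	// Sketch: re-add missing cluster/context entries to a kubeconfig.
	package main

	import (
		"k8s.io/client-go/tools/clientcmd"
		clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
	)

	func main() {
		path := "/home/jenkins/minikube-integration/21767-321838/kubeconfig"
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			panic(err)
		}
		name := "default-k8s-diff-port-767846"
		if _, ok := cfg.Clusters[name]; !ok {
			cfg.Clusters[name] = &clientcmdapi.Cluster{Server: "https://192.168.103.2:8444"}
		}
		if _, ok := cfg.Contexts[name]; !ok {
			// Assumes an AuthInfo of the same name already exists in the file.
			cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
		}
		if err := clientcmd.WriteToFile(*cfg, path); err != nil {
			panic(err)
		}
	}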
	I1025 10:21:12.214595  636484 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:21:12.225309  636484 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1025 10:21:12.225361  636484 kubeadm.go:601] duration metric: took 23.484211ms to restartPrimaryControlPlane
	I1025 10:21:12.225372  636484 kubeadm.go:402] duration metric: took 91.480993ms to StartCluster
	I1025 10:21:12.225394  636484 settings.go:142] acquiring lock: {Name:mkccf1ae03faaf532633e370e89b56d0fc3d2b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:12.225489  636484 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:21:12.226739  636484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:12.227039  636484 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:21:12.227167  636484 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:21:12.227262  636484 config.go:182] Loaded profile config "default-k8s-diff-port-767846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:12.227271  636484 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-767846"
	I1025 10:21:12.227291  636484 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-767846"
	W1025 10:21:12.227299  636484 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:21:12.227297  636484 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-767846"
	I1025 10:21:12.227332  636484 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-767846"
	I1025 10:21:12.227339  636484 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-767846"
	W1025 10:21:12.227342  636484 addons.go:247] addon dashboard should already be in state true
	I1025 10:21:12.227353  636484 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-767846"
	I1025 10:21:12.227367  636484 host.go:66] Checking if "default-k8s-diff-port-767846" exists ...
	I1025 10:21:12.227371  636484 host.go:66] Checking if "default-k8s-diff-port-767846" exists ...
	I1025 10:21:12.227806  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.227847  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.227905  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.232961  636484 out.go:179] * Verifying Kubernetes components...
	I1025 10:21:12.234572  636484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:12.260042  636484 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:21:12.260116  636484 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:21:12.261263  636484 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-767846"
	W1025 10:21:12.261282  636484 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:21:12.261305  636484 host.go:66] Checking if "default-k8s-diff-port-767846" exists ...
	I1025 10:21:12.261728  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.262059  636484 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:21:12.262078  636484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:21:12.262129  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:12.265414  636484 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1025 10:21:09.268544  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	W1025 10:21:11.766755  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	W1025 10:21:09.831833  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:12.337504  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	I1025 10:21:12.266825  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:21:12.266852  636484 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:21:12.266926  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:12.302238  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:12.306595  636484 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:21:12.306701  636484 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:21:12.306633  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:12.307467  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:12.337295  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:12.414307  636484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:21:12.436001  636484 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-767846" to be "Ready" ...
	I1025 10:21:12.436611  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:21:12.436644  636484 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:21:12.451080  636484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:21:12.456814  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:21:12.456844  636484 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:21:12.465383  636484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:21:12.479456  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:21:12.479485  636484 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:21:12.501005  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:21:12.501032  636484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:21:12.526625  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:21:12.526672  636484 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:21:12.553034  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:21:12.553076  636484 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:21:12.573193  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:21:12.573227  636484 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:21:12.590613  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:21:12.590687  636484 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:21:12.606035  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:21:12.606071  636484 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:21:12.624851  636484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:21:13.931289  636484 node_ready.go:49] node "default-k8s-diff-port-767846" is "Ready"
	I1025 10:21:13.931333  636484 node_ready.go:38] duration metric: took 1.495294194s for node "default-k8s-diff-port-767846" to be "Ready" ...
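
The node_ready.go lines above are a readiness poll: fetch the Node object and wait until its Ready condition reports True, which here took ~1.5s. Below is a minimal sketch of the same wait using client-go; it assumes a context-aware client-go (v0.18+), and the kubeconfig path, poll interval, and function name are illustrative, not minikube's actual code.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the named node until its Ready condition is True,
	// the shape of the wait logged above. Sketch only: path and interval
	// are illustrative assumptions, not minikube's implementation.
	func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		ctx, cancel := context.WithTimeout(context.Background(), timeout)
		defer cancel()
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("node %q not Ready after %s", name, timeout)
			case <-time.After(2 * time.Second):
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitNodeReady(cs, "default-k8s-diff-port-767846", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
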
	I1025 10:21:13.931355  636484 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:21:13.931415  636484 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:21:10.403779  638584 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 10:21:10.404001  638584 start.go:159] libmachine.API.Create for "embed-certs-683681" (driver="docker")
	I1025 10:21:10.404030  638584 client.go:168] LocalClient.Create starting
	I1025 10:21:10.404114  638584 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem
	I1025 10:21:10.404167  638584 main.go:141] libmachine: Decoding PEM data...
	I1025 10:21:10.404189  638584 main.go:141] libmachine: Parsing certificate...
	I1025 10:21:10.404267  638584 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem
	I1025 10:21:10.404309  638584 main.go:141] libmachine: Decoding PEM data...
	I1025 10:21:10.404335  638584 main.go:141] libmachine: Parsing certificate...
	I1025 10:21:10.404773  638584 cli_runner.go:164] Run: docker network inspect embed-certs-683681 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 10:21:10.426055  638584 cli_runner.go:211] docker network inspect embed-certs-683681 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 10:21:10.426150  638584 network_create.go:284] running [docker network inspect embed-certs-683681] to gather additional debugging logs...
	I1025 10:21:10.426175  638584 cli_runner.go:164] Run: docker network inspect embed-certs-683681
	W1025 10:21:10.450027  638584 cli_runner.go:211] docker network inspect embed-certs-683681 returned with exit code 1
	I1025 10:21:10.450066  638584 network_create.go:287] error running [docker network inspect embed-certs-683681]: docker network inspect embed-certs-683681: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-683681 not found
	I1025 10:21:10.450079  638584 network_create.go:289] output of [docker network inspect embed-certs-683681]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-683681 not found
	
	** /stderr **
	I1025 10:21:10.450215  638584 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:21:10.472971  638584 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b7c770f4d6bb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:31:17:4a:ca:3a} reservation:<nil>}
	I1025 10:21:10.473601  638584 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5189eca196b1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:42:d7:a0:fe:65} reservation:<nil>}
	I1025 10:21:10.474232  638584 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a58b5f36975c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1e:4d:ae:71:f0:49} reservation:<nil>}
	I1025 10:21:10.474754  638584 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c8aca1f62a35 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ce:65:a5:98:3f:04} reservation:<nil>}
	I1025 10:21:10.475283  638584 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-cc93092e09ae IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:c2:73:0a:fa:f6:13} reservation:<nil>}
	I1025 10:21:10.475999  638584 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a03c50}
	I1025 10:21:10.476026  638584 network_create.go:124] attempt to create docker network embed-certs-683681 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1025 10:21:10.476083  638584 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-683681 embed-certs-683681
	I1025 10:21:10.551427  638584 network_create.go:108] docker network embed-certs-683681 192.168.94.0/24 created
	I1025 10:21:10.551459  638584 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-683681" container
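
The subnet scan above follows a fixed pattern: candidate /24 networks start at 192.168.49.0 and the third octet advances in steps of 9 (49, 58, 67, 76, 85, 94) until a CIDR with no existing bridge is found; the first free one becomes the new network and .2 becomes the node IP. A minimal Go sketch of that selection logic, with illustrative names rather than minikube's actual API:

	package main

	import "fmt"

	// freeSubnet mimics the scan visible in the log above: candidate /24
	// subnets start at 192.168.49.0 and advance the third octet by 9
	// (49, 58, 67, 76, 85, 94, ...) until one is not already taken.
	// Illustrative sketch, not minikube's real implementation.
	func freeSubnet(taken map[string]bool) (string, error) {
		for octet := 49; octet <= 247; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[cidr] {
				return cidr, nil
			}
		}
		return "", fmt.Errorf("no free private /24 found")
	}

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true, "192.168.58.0/24": true,
			"192.168.67.0/24": true, "192.168.76.0/24": true,
			"192.168.85.0/24": true,
		}
		cidr, _ := freeSubnet(taken)
		fmt.Println(cidr) // 192.168.94.0/24, matching the log
	}
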
	I1025 10:21:10.551518  638584 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 10:21:10.575731  638584 cli_runner.go:164] Run: docker volume create embed-certs-683681 --label name.minikube.sigs.k8s.io=embed-certs-683681 --label created_by.minikube.sigs.k8s.io=true
	I1025 10:21:10.596450  638584 oci.go:103] Successfully created a docker volume embed-certs-683681
	I1025 10:21:10.596543  638584 cli_runner.go:164] Run: docker run --rm --name embed-certs-683681-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-683681 --entrypoint /usr/bin/test -v embed-certs-683681:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 10:21:11.043993  638584 oci.go:107] Successfully prepared a docker volume embed-certs-683681
	I1025 10:21:11.044039  638584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:11.044062  638584 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 10:21:11.044129  638584 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-683681:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1025 10:21:13.772552  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	I1025 10:21:14.336599  624632 pod_ready.go:94] pod "coredns-5dd5756b68-k5644" is "Ready"
	I1025 10:21:14.336630  624632 pod_ready.go:86] duration metric: took 39.577109588s for pod "coredns-5dd5756b68-k5644" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.340650  624632 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.346235  624632 pod_ready.go:94] pod "etcd-old-k8s-version-714798" is "Ready"
	I1025 10:21:14.346269  624632 pod_ready.go:86] duration metric: took 5.588309ms for pod "etcd-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.349654  624632 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.355198  624632 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-714798" is "Ready"
	I1025 10:21:14.355230  624632 pod_ready.go:86] duration metric: took 5.550064ms for pod "kube-apiserver-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.359203  624632 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.515864  624632 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-714798" is "Ready"
	I1025 10:21:14.515908  624632 pod_ready.go:86] duration metric: took 156.674255ms for pod "kube-controller-manager-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.679941  624632 pod_ready.go:83] waiting for pod "kube-proxy-kqg7q" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.064359  624632 pod_ready.go:94] pod "kube-proxy-kqg7q" is "Ready"
	I1025 10:21:15.064395  624632 pod_ready.go:86] duration metric: took 384.425103ms for pod "kube-proxy-kqg7q" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.264420  624632 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.664469  624632 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-714798" is "Ready"
	I1025 10:21:15.664501  624632 pod_ready.go:86] duration metric: took 400.048856ms for pod "kube-scheduler-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.664517  624632 pod_ready.go:40] duration metric: took 40.910543454s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:15.713277  624632 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1025 10:21:15.739862  624632 out.go:203] 
	W1025 10:21:15.783078  624632 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1025 10:21:15.791059  624632 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1025 10:21:15.796132  624632 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-714798" cluster and "default" namespace by default
	I1025 10:21:15.245915  636484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.794706474s)
	I1025 10:21:15.246013  636484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.780553475s)
	I1025 10:21:16.201960  636484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.577043142s)
	I1025 10:21:16.202175  636484 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.270743207s)
	I1025 10:21:16.202205  636484 api_server.go:72] duration metric: took 3.975127965s to wait for apiserver process to appear ...
	I1025 10:21:16.202212  636484 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:21:16.202233  636484 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1025 10:21:16.203931  636484 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-767846 addons enable metrics-server
	
	I1025 10:21:16.206179  636484 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1025 10:21:14.831620  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:16.832274  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	I1025 10:21:16.207469  636484 addons.go:514] duration metric: took 3.980316596s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1025 10:21:16.208161  636484 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:21:16.208186  636484 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:21:16.702507  636484 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1025 10:21:16.707281  636484 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1025 10:21:16.708497  636484 api_server.go:141] control plane version: v1.34.1
	I1025 10:21:16.708529  636484 api_server.go:131] duration metric: took 506.309184ms to wait for apiserver health ...
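
The two probes above show why the health wait exists: the first GET to /healthz returns 500 while the rbac/bootstrap-roles post-start hook is still initializing, and a retry roughly half a second later returns 200. A sketch of such a poll loop, assuming a self-signed apiserver certificate (hence the skipped TLS verification); the URL, interval, and function name are illustrative:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls the apiserver /healthz endpoint until it returns
	// 200 or the deadline passes, mirroring the retry above where the
	// first probe saw 500 and the second, ~500ms later, saw 200.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// The in-cluster apiserver cert is self-signed, so a bare
			// probe like this one must skip verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy within %s", timeout)
	}

	func main() {
		if err := waitHealthz("https://192.168.103.2:8444/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
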
	I1025 10:21:16.708542  636484 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:21:16.712747  636484 system_pods.go:59] 8 kube-system pods found
	I1025 10:21:16.712806  636484 system_pods.go:61] "coredns-66bc5c9577-rznxv" [d7eae20c-8d39-4486-ab11-13675911180f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:16.712819  636484 system_pods.go:61] "etcd-default-k8s-diff-port-767846" [7612c238-ca0d-458d-a901-e96167590fc2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:21:16.712835  636484 system_pods.go:61] "kindnet-vcqs2" [e41fd0fd-97c4-44ef-a645-cf0136340098] Running
	I1025 10:21:16.712845  636484 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-767846" [6eaa12a1-d4b6-4e96-81ed-18662e83034c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:21:16.712859  636484 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-767846" [7ffb55c8-db2f-4807-ace7-044a0c281f62] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:21:16.712874  636484 system_pods.go:61] "kube-proxy-cvm5c" [42278e98-5278-4efa-b484-ec73c16fc851] Running
	I1025 10:21:16.712885  636484 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-767846" [934d0b38-85ad-4c54-9e94-970177aa8cf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:21:16.712924  636484 system_pods.go:61] "storage-provisioner" [06a917da-eaa2-4b50-8c56-31a0ca7d14e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:16.712936  636484 system_pods.go:74] duration metric: took 4.383599ms to wait for pod list to return data ...
	I1025 10:21:16.712948  636484 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:21:16.715673  636484 default_sa.go:45] found service account: "default"
	I1025 10:21:16.715694  636484 default_sa.go:55] duration metric: took 2.737037ms for default service account to be created ...
	I1025 10:21:16.715704  636484 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:21:16.718943  636484 system_pods.go:86] 8 kube-system pods found
	I1025 10:21:16.718978  636484 system_pods.go:89] "coredns-66bc5c9577-rznxv" [d7eae20c-8d39-4486-ab11-13675911180f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:16.718990  636484 system_pods.go:89] "etcd-default-k8s-diff-port-767846" [7612c238-ca0d-458d-a901-e96167590fc2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:21:16.718997  636484 system_pods.go:89] "kindnet-vcqs2" [e41fd0fd-97c4-44ef-a645-cf0136340098] Running
	I1025 10:21:16.719005  636484 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-767846" [6eaa12a1-d4b6-4e96-81ed-18662e83034c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:21:16.719014  636484 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-767846" [7ffb55c8-db2f-4807-ace7-044a0c281f62] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:21:16.719034  636484 system_pods.go:89] "kube-proxy-cvm5c" [42278e98-5278-4efa-b484-ec73c16fc851] Running
	I1025 10:21:16.719042  636484 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-767846" [934d0b38-85ad-4c54-9e94-970177aa8cf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:21:16.719049  636484 system_pods.go:89] "storage-provisioner" [06a917da-eaa2-4b50-8c56-31a0ca7d14e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:16.719059  636484 system_pods.go:126] duration metric: took 3.347724ms to wait for k8s-apps to be running ...
	I1025 10:21:16.719070  636484 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:21:16.719120  636484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:21:16.733907  636484 system_svc.go:56] duration metric: took 14.825705ms WaitForService to wait for kubelet
	I1025 10:21:16.733943  636484 kubeadm.go:586] duration metric: took 4.506864504s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:21:16.733968  636484 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:21:16.737241  636484 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 10:21:16.737269  636484 node_conditions.go:123] node cpu capacity is 8
	I1025 10:21:16.737284  636484 node_conditions.go:105] duration metric: took 3.310515ms to run NodePressure ...
	I1025 10:21:16.737296  636484 start.go:241] waiting for startup goroutines ...
	I1025 10:21:16.737306  636484 start.go:246] waiting for cluster config update ...
	I1025 10:21:16.737329  636484 start.go:255] writing updated cluster config ...
	I1025 10:21:16.737611  636484 ssh_runner.go:195] Run: rm -f paused
	I1025 10:21:16.742069  636484 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:16.748801  636484 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rznxv" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:21:18.754620  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:16.111649  638584 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-683681:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.067461823s)
	I1025 10:21:16.111690  638584 kic.go:203] duration metric: took 5.067622848s to extract preloaded images to volume ...
	W1025 10:21:16.111819  638584 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 10:21:16.111866  638584 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 10:21:16.111917  638584 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 10:21:16.213690  638584 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-683681 --name embed-certs-683681 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-683681 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-683681 --network embed-certs-683681 --ip 192.168.94.2 --volume embed-certs-683681:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 10:21:16.572477  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Running}}
	I1025 10:21:16.594243  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:16.615558  638584 cli_runner.go:164] Run: docker exec embed-certs-683681 stat /var/lib/dpkg/alternatives/iptables
	I1025 10:21:16.666536  638584 oci.go:144] the created container "embed-certs-683681" has a running status.
	I1025 10:21:16.666576  638584 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa...
	I1025 10:21:16.809984  638584 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 10:21:16.847757  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:16.871585  638584 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 10:21:16.871610  638584 kic_runner.go:114] Args: [docker exec --privileged embed-certs-683681 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 10:21:16.923128  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:16.943365  638584 machine.go:93] provisionDockerMachine start ...
	I1025 10:21:16.943479  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:16.966341  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:16.966647  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:16.966668  638584 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:21:16.967537  638584 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56448->127.0.0.1:33128: read: connection reset by peer
	I1025 10:21:20.116967  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-683681
	
	I1025 10:21:20.117014  638584 ubuntu.go:182] provisioning hostname "embed-certs-683681"
	I1025 10:21:20.117084  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:20.137778  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:20.138008  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:20.138021  638584 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-683681 && echo "embed-certs-683681" | sudo tee /etc/hostname
	W1025 10:21:19.333601  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:21.831601  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:20.755645  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:22.755896  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:20.296939  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-683681
	
	I1025 10:21:20.297025  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:20.319104  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:20.319456  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:20.319479  638584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-683681' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-683681/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-683681' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:21:20.480669  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:21:20.480704  638584 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-321838/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-321838/.minikube}
	I1025 10:21:20.480727  638584 ubuntu.go:190] setting up certificates
	I1025 10:21:20.480741  638584 provision.go:84] configureAuth start
	I1025 10:21:20.480822  638584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-683681
	I1025 10:21:20.505092  638584 provision.go:143] copyHostCerts
	I1025 10:21:20.505168  638584 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem, removing ...
	I1025 10:21:20.505184  638584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem
	I1025 10:21:20.505274  638584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem (1679 bytes)
	I1025 10:21:20.505416  638584 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem, removing ...
	I1025 10:21:20.505430  638584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem
	I1025 10:21:20.505476  638584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem (1078 bytes)
	I1025 10:21:20.505561  638584 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem, removing ...
	I1025 10:21:20.505572  638584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem
	I1025 10:21:20.505630  638584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem (1123 bytes)
	I1025 10:21:20.505706  638584 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem org=jenkins.embed-certs-683681 san=[127.0.0.1 192.168.94.2 embed-certs-683681 localhost minikube]
	I1025 10:21:20.998585  638584 provision.go:177] copyRemoteCerts
	I1025 10:21:20.998661  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:21:20.998717  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.022129  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.137465  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 10:21:21.166388  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:21:21.193168  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:21:21.218286  638584 provision.go:87] duration metric: took 737.524136ms to configureAuth
	I1025 10:21:21.218330  638584 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:21:21.218553  638584 config.go:182] Loaded profile config "embed-certs-683681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:21.218676  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.245915  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:21.246236  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:21.246262  638584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:21:21.569413  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:21:21.569443  638584 machine.go:96] duration metric: took 4.626049853s to provisionDockerMachine
	I1025 10:21:21.569456  638584 client.go:171] duration metric: took 11.165417694s to LocalClient.Create
	I1025 10:21:21.569475  638584 start.go:167] duration metric: took 11.165474816s to libmachine.API.Create "embed-certs-683681"
	I1025 10:21:21.569486  638584 start.go:293] postStartSetup for "embed-certs-683681" (driver="docker")
	I1025 10:21:21.569498  638584 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:21:21.569575  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:21:21.569622  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.594722  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.713328  638584 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:21:21.718538  638584 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:21:21.718572  638584 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:21:21.718589  638584 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/addons for local assets ...
	I1025 10:21:21.718659  638584 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/files for local assets ...
	I1025 10:21:21.718787  638584 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem -> 3254552.pem in /etc/ssl/certs
	I1025 10:21:21.718927  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:21:21.729097  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:21.759300  638584 start.go:296] duration metric: took 189.796063ms for postStartSetup
	I1025 10:21:21.759764  638584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-683681
	I1025 10:21:21.783751  638584 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/config.json ...
	I1025 10:21:21.784070  638584 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:21:21.784113  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.807921  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.920186  638584 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:21:21.927662  638584 start.go:128] duration metric: took 11.525830646s to createHost
	I1025 10:21:21.927699  638584 start.go:83] releasing machines lock for "embed-certs-683681", held for 11.526002458s
	I1025 10:21:21.927785  638584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-683681
	I1025 10:21:21.954049  638584 ssh_runner.go:195] Run: cat /version.json
	I1025 10:21:21.954096  638584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:21:21.954115  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.954188  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.978409  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.979872  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:22.092988  638584 ssh_runner.go:195] Run: systemctl --version
	I1025 10:21:22.175966  638584 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:21:22.229838  638584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:21:22.236975  638584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:21:22.237063  638584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:21:22.280942  638584 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 10:21:22.280974  638584 start.go:495] detecting cgroup driver to use...
	I1025 10:21:22.281010  638584 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 10:21:22.281075  638584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:21:22.306839  638584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:21:22.324489  638584 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:21:22.324560  638584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:21:22.350902  638584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:21:22.380086  638584 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:21:22.506896  638584 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:21:22.639498  638584 docker.go:234] disabling docker service ...
	I1025 10:21:22.639578  638584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:21:22.669198  638584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:21:22.689583  638584 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:21:22.814437  638584 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:21:22.917355  638584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:21:22.933471  638584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:21:22.951220  638584 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:21:22.951289  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.964021  638584 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 10:21:22.964092  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.974888  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.985640  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.996280  638584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:21:23.008692  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:23.019742  638584 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:23.036857  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:23.048489  638584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:21:23.060801  638584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:21:23.072496  638584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:23.170641  638584 ssh_runner.go:195] Run: sudo systemctl restart crio
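
Taken together, the sed edits a few lines above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following drop-in settings, which the `systemctl restart crio` just applied. This is a reconstruction from the logged commands, not a capture of the actual file:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
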
	I1025 10:21:24.036513  638584 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:21:24.036615  638584 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:21:24.042080  638584 start.go:563] Will wait 60s for crictl version
	I1025 10:21:24.042156  638584 ssh_runner.go:195] Run: which crictl
	I1025 10:21:24.047422  638584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:21:24.082362  638584 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:21:24.082466  638584 ssh_runner.go:195] Run: crio --version
	I1025 10:21:24.126861  638584 ssh_runner.go:195] Run: crio --version
	I1025 10:21:24.175837  638584 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:21:24.178134  638584 cli_runner.go:164] Run: docker network inspect embed-certs-683681 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:21:24.201413  638584 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1025 10:21:24.207278  638584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:21:24.223512  638584 kubeadm.go:883] updating cluster {Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:21:24.223683  638584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:24.223762  638584 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:24.272966  638584 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:24.272993  638584 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:21:24.273051  638584 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:24.308934  638584 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:24.308965  638584 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:21:24.308975  638584 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1025 10:21:24.309097  638584 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-683681 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:21:24.309184  638584 ssh_runner.go:195] Run: crio config
	I1025 10:21:24.382243  638584 cni.go:84] Creating CNI manager for ""
	I1025 10:21:24.382273  638584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:21:24.382297  638584 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:21:24.382337  638584 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-683681 NodeName:embed-certs-683681 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:21:24.382524  638584 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-683681"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
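	(Note: the kubeadm config printed above is written to /var/tmp/minikube/kubeadm.yaml.new in the next step. If a generated config like this needs to be checked by hand, recent kubeadm releases can validate it statically without starting anything; a sketch, assuming kubeadm is on PATH on the node:
	  # static validation of the InitConfiguration/ClusterConfiguration above
	  kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	)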
	I1025 10:21:24.382607  638584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:21:24.394268  638584 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:21:24.394387  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:21:24.406618  638584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1025 10:21:24.425969  638584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:21:24.449251  638584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1025 10:21:24.469582  638584 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:21:24.474973  638584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
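	(Note: the one-liner above is a small atomic-update pattern for /etc/hosts: drop any stale control-plane.minikube.internal line, append the fresh mapping, write to a temp file, then copy it back into place so the file is never left half-written. Unrolled for readability, same commands as in the log line:
	  { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	    echo "192.168.94.2	control-plane.minikube.internal"
	  } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts
	)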
	I1025 10:21:24.490157  638584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:24.584608  638584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:21:24.614181  638584 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681 for IP: 192.168.94.2
	I1025 10:21:24.614210  638584 certs.go:195] generating shared ca certs ...
	I1025 10:21:24.614233  638584 certs.go:227] acquiring lock for ca certs: {Name:mke559c04eea9c265cb2a7beab0da125bee52db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.614424  638584 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key
	I1025 10:21:24.614484  638584 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key
	I1025 10:21:24.614496  638584 certs.go:257] generating profile certs ...
	I1025 10:21:24.614561  638584 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.key
	I1025 10:21:24.614588  638584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.crt with IP's: []
	I1025 10:21:24.860136  638584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.crt ...
	I1025 10:21:24.860185  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.crt: {Name:mk13866e786fa05bf2537b78a891e332bde8c0bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.860411  638584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.key ...
	I1025 10:21:24.860433  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.key: {Name:mk1337a45bd58216e46a47cf6f99440d10fa8b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.860559  638584 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81
	I1025 10:21:24.860582  638584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1025 10:21:24.949254  638584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81 ...
	I1025 10:21:24.949286  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81: {Name:mkc51a7d58b8866a38120d27081d78fd5d68e786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.949518  638584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81 ...
	I1025 10:21:24.949547  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81: {Name:mk94d386c4ce3ce7255b450634f934fa53890845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.949697  638584 certs.go:382] copying /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81 -> /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt
	I1025 10:21:24.949820  638584 certs.go:386] copying /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81 -> /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key
	I1025 10:21:24.949908  638584 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key
	I1025 10:21:24.949937  638584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt with IP's: []
	W1025 10:21:24.331982  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:26.831359  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:25.254917  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:27.754831  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:25.383221  638584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt ...
	I1025 10:21:25.383272  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt: {Name:mk46cb1967cb21d5d9aafce0c0335add4612cf00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:25.383535  638584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key ...
	I1025 10:21:25.383560  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key: {Name:mkda2e4f8c6847061b7c83d0748f50b193d241a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:25.383814  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem (1338 bytes)
	W1025 10:21:25.383870  638584 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455_empty.pem, impossibly tiny 0 bytes
	I1025 10:21:25.383887  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:21:25.383917  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:21:25.383941  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:21:25.383962  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem (1679 bytes)
	I1025 10:21:25.384004  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:25.384676  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:21:25.406810  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 10:21:25.429770  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:21:25.451189  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:21:25.475734  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1025 10:21:25.500538  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 10:21:25.522356  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:21:25.545290  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:21:25.567130  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:21:25.591445  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem --> /usr/share/ca-certificates/325455.pem (1338 bytes)
	I1025 10:21:25.616100  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /usr/share/ca-certificates/3254552.pem (1708 bytes)
	I1025 10:21:25.635723  638584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:21:25.650419  638584 ssh_runner.go:195] Run: openssl version
	I1025 10:21:25.657438  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:21:25.667296  638584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:25.671566  638584 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:32 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:25.671639  638584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:25.708223  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:21:25.718734  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/325455.pem && ln -fs /usr/share/ca-certificates/325455.pem /etc/ssl/certs/325455.pem"
	I1025 10:21:25.728930  638584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/325455.pem
	I1025 10:21:25.733604  638584 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:38 /usr/share/ca-certificates/325455.pem
	I1025 10:21:25.733672  638584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/325455.pem
	I1025 10:21:25.770496  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/325455.pem /etc/ssl/certs/51391683.0"
	I1025 10:21:25.780237  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3254552.pem && ln -fs /usr/share/ca-certificates/3254552.pem /etc/ssl/certs/3254552.pem"
	I1025 10:21:25.790312  638584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3254552.pem
	I1025 10:21:25.794835  638584 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:38 /usr/share/ca-certificates/3254552.pem
	I1025 10:21:25.794898  638584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3254552.pem
	I1025 10:21:25.832583  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3254552.pem /etc/ssl/certs/3ec20f2e.0"
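	(Note: the 8-hex-digit link names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes: openssl x509 -hash prints the hash of the certificate's subject, and the <hash>.0 symlink is the layout OpenSSL's directory-based lookup expects in /etc/ssl/certs. Reproducing one by hand, using the cert from this run:
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  # prints b5213941 for this CA, matching the /etc/ssl/certs/b5213941.0 link
	)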
	I1025 10:21:25.842614  638584 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:21:25.846872  638584 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 10:21:25.846930  638584 kubeadm.go:400] StartCluster: {Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:21:25.847005  638584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:21:25.847068  638584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:21:25.875826  638584 cri.go:89] found id: ""
	I1025 10:21:25.875903  638584 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:21:25.885163  638584 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:21:25.894136  638584 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 10:21:25.894192  638584 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:21:25.903706  638584 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 10:21:25.903732  638584 kubeadm.go:157] found existing configuration files:
	
	I1025 10:21:25.903784  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 10:21:25.913301  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 10:21:25.913384  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:21:25.923343  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 10:21:25.932490  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 10:21:25.932550  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:21:25.941477  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 10:21:25.950962  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 10:21:25.951028  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:21:25.959533  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 10:21:25.968524  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 10:21:25.968595  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 10:21:25.977380  638584 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 10:21:26.045566  638584 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 10:21:26.120440  638584 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1025 10:21:29.331743  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:31.831906  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:30.254936  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:32.256411  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:36.665150  638584 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 10:21:36.665238  638584 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 10:21:36.665366  638584 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 10:21:36.665424  638584 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 10:21:36.665455  638584 kubeadm.go:318] OS: Linux
	I1025 10:21:36.665528  638584 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 10:21:36.665640  638584 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 10:21:36.665711  638584 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 10:21:36.665755  638584 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 10:21:36.665836  638584 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 10:21:36.665906  638584 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 10:21:36.665989  638584 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 10:21:36.666061  638584 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 10:21:36.666164  638584 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 10:21:36.666287  638584 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 10:21:36.666443  638584 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 10:21:36.666505  638584 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 10:21:36.668101  638584 out.go:252]   - Generating certificates and keys ...
	I1025 10:21:36.668178  638584 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 10:21:36.668239  638584 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 10:21:36.668297  638584 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 10:21:36.668408  638584 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 10:21:36.668487  638584 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 10:21:36.668570  638584 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 10:21:36.668632  638584 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 10:21:36.669282  638584 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-683681 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1025 10:21:36.669368  638584 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 10:21:36.669522  638584 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-683681 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1025 10:21:36.669602  638584 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 10:21:36.669681  638584 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 10:21:36.669732  638584 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 10:21:36.669795  638584 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 10:21:36.669856  638584 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 10:21:36.669922  638584 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 10:21:36.669975  638584 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 10:21:36.670054  638584 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 10:21:36.670110  638584 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 10:21:36.670198  638584 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 10:21:36.670268  638584 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 10:21:36.673336  638584 out.go:252]   - Booting up control plane ...
	I1025 10:21:36.673471  638584 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 10:21:36.673585  638584 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 10:21:36.673666  638584 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 10:21:36.673811  638584 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 10:21:36.673918  638584 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 10:21:36.674052  638584 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 10:21:36.674150  638584 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 10:21:36.674197  638584 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 10:21:36.674448  638584 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 10:21:36.674610  638584 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 10:21:36.674735  638584 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.921842ms
	I1025 10:21:36.674869  638584 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 10:21:36.674985  638584 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1025 10:21:36.675113  638584 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 10:21:36.675225  638584 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 10:21:36.675373  638584 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.848539291s
	I1025 10:21:36.675485  638584 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.099917517s
	I1025 10:21:36.675576  638584 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.501482903s
	I1025 10:21:36.675749  638584 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 10:21:36.675902  638584 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 10:21:36.675992  638584 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 10:21:36.676186  638584 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-683681 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 10:21:36.676270  638584 kubeadm.go:318] [bootstrap-token] Using token: gh3e3n.vi8ppuvnf3ix9l58
	I1025 10:21:36.678455  638584 out.go:252]   - Configuring RBAC rules ...
	I1025 10:21:36.678655  638584 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 10:21:36.678741  638584 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 10:21:36.678915  638584 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 10:21:36.679094  638584 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 10:21:36.679206  638584 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 10:21:36.679286  638584 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 10:21:36.679483  638584 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 10:21:36.679551  638584 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 10:21:36.679620  638584 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 10:21:36.679632  638584 kubeadm.go:318] 
	I1025 10:21:36.679721  638584 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 10:21:36.679732  638584 kubeadm.go:318] 
	I1025 10:21:36.679835  638584 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 10:21:36.679845  638584 kubeadm.go:318] 
	I1025 10:21:36.679882  638584 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 10:21:36.679977  638584 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 10:21:36.680061  638584 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 10:21:36.680070  638584 kubeadm.go:318] 
	I1025 10:21:36.680154  638584 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 10:21:36.680170  638584 kubeadm.go:318] 
	I1025 10:21:36.680221  638584 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 10:21:36.680229  638584 kubeadm.go:318] 
	I1025 10:21:36.680289  638584 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 10:21:36.680387  638584 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 10:21:36.680463  638584 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 10:21:36.680471  638584 kubeadm.go:318] 
	I1025 10:21:36.680563  638584 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 10:21:36.680661  638584 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 10:21:36.680670  638584 kubeadm.go:318] 
	I1025 10:21:36.680776  638584 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token gh3e3n.vi8ppuvnf3ix9l58 \
	I1025 10:21:36.680932  638584 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d220a0fdd8ffa4188e5a3a4b5b37c16072a735a52e5507fcdbcb4d38c461642f \
	I1025 10:21:36.680959  638584 kubeadm.go:318] 	--control-plane 
	I1025 10:21:36.680967  638584 kubeadm.go:318] 
	I1025 10:21:36.681062  638584 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 10:21:36.681073  638584 kubeadm.go:318] 
	I1025 10:21:36.681190  638584 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token gh3e3n.vi8ppuvnf3ix9l58 \
	I1025 10:21:36.681350  638584 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d220a0fdd8ffa4188e5a3a4b5b37c16072a735a52e5507fcdbcb4d38c461642f 
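	(Note: the --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key. If the hash is ever lost, the standard kubeadm recipe recomputes it from ca.crt; a sketch, assuming an RSA CA key and the cert path used in this run:
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'
	)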
	I1025 10:21:36.681383  638584 cni.go:84] Creating CNI manager for ""
	I1025 10:21:36.681402  638584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:21:36.685048  638584 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1025 10:21:34.332728  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:36.832195  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:34.756305  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:37.255124  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:36.686372  638584 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 10:21:36.691990  638584 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 10:21:36.692012  638584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 10:21:36.711248  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 10:21:36.950001  638584 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 10:21:36.950063  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:36.950140  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-683681 minikube.k8s.io/updated_at=2025_10_25T10_21_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689 minikube.k8s.io/name=embed-certs-683681 minikube.k8s.io/primary=true
	I1025 10:21:36.962716  638584 ops.go:34] apiserver oom_adj: -16
	I1025 10:21:37.040626  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:37.541457  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:38.041452  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:38.541265  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:39.041583  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:39.541553  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:40.041803  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:39.330926  631515 pod_ready.go:94] pod "coredns-66bc5c9577-gtnvx" is "Ready"
	I1025 10:21:39.330956  631515 pod_ready.go:86] duration metric: took 38.506063732s for pod "coredns-66bc5c9577-gtnvx" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.333923  631515 pod_ready.go:83] waiting for pod "etcd-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.338091  631515 pod_ready.go:94] pod "etcd-no-preload-899665" is "Ready"
	I1025 10:21:39.338119  631515 pod_ready.go:86] duration metric: took 4.169551ms for pod "etcd-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.340510  631515 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.344782  631515 pod_ready.go:94] pod "kube-apiserver-no-preload-899665" is "Ready"
	I1025 10:21:39.344808  631515 pod_ready.go:86] duration metric: took 4.267435ms for pod "kube-apiserver-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.346928  631515 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.527867  631515 pod_ready.go:94] pod "kube-controller-manager-no-preload-899665" is "Ready"
	I1025 10:21:39.527898  631515 pod_ready.go:86] duration metric: took 180.948376ms for pod "kube-controller-manager-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.728099  631515 pod_ready.go:83] waiting for pod "kube-proxy-fdthr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:40.129442  631515 pod_ready.go:94] pod "kube-proxy-fdthr" is "Ready"
	I1025 10:21:40.129471  631515 pod_ready.go:86] duration metric: took 401.343438ms for pod "kube-proxy-fdthr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:40.329196  631515 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:40.728428  631515 pod_ready.go:94] pod "kube-scheduler-no-preload-899665" is "Ready"
	I1025 10:21:40.728461  631515 pod_ready.go:86] duration metric: took 399.238728ms for pod "kube-scheduler-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:40.728477  631515 pod_ready.go:40] duration metric: took 39.908384057s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:40.776763  631515 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 10:21:40.778765  631515 out.go:179] * Done! kubectl is now configured to use "no-preload-899665" cluster and "default" namespace by default
	I1025 10:21:40.541552  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:41.041202  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:41.540928  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:41.626698  638584 kubeadm.go:1113] duration metric: took 4.676682024s to wait for elevateKubeSystemPrivileges
	I1025 10:21:41.626740  638584 kubeadm.go:402] duration metric: took 15.779813606s to StartCluster
	I1025 10:21:41.626763  638584 settings.go:142] acquiring lock: {Name:mkccf1ae03faaf532633e370e89b56d0fc3d2b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:41.626844  638584 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:21:41.628485  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:41.628738  638584 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:21:41.628758  638584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 10:21:41.628815  638584 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:21:41.628922  638584 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-683681"
	I1025 10:21:41.628947  638584 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-683681"
	I1025 10:21:41.628984  638584 host.go:66] Checking if "embed-certs-683681" exists ...
	I1025 10:21:41.628970  638584 addons.go:69] Setting default-storageclass=true in profile "embed-certs-683681"
	I1025 10:21:41.629014  638584 config.go:182] Loaded profile config "embed-certs-683681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:41.629033  638584 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-683681"
	I1025 10:21:41.629466  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:41.629530  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:41.632478  638584 out.go:179] * Verifying Kubernetes components...
	I1025 10:21:41.635235  638584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:41.654284  638584 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:21:41.655720  638584 addons.go:238] Setting addon default-storageclass=true in "embed-certs-683681"
	I1025 10:21:41.655762  638584 host.go:66] Checking if "embed-certs-683681" exists ...
	I1025 10:21:41.656106  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:41.656203  638584 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:21:41.656228  638584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:21:41.656290  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:41.679823  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:41.684242  638584 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:21:41.684268  638584 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:21:41.684345  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:41.712034  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
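	(Note: the docker container inspect --format calls above extract the published host port for 22/tcp from the container's network settings with a Go template. The docker CLI has a shorthand for the same lookup, equivalent as long as the container is running:
	  docker port embed-certs-683681 22/tcp
	  # prints something like 127.0.0.1:33128, the address sshutil connects to
	)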
	I1025 10:21:41.726056  638584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
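	(Note: the sed pipeline above edits the live coredns ConfigMap in place: it inserts a hosts plugin stanza before the forward directive, so host.minikube.internal resolves to the host gateway, and adds a log directive after errors. Reconstructed from the sed expressions, the injected Corefile fragment is:
	  hosts {
	     192.168.94.1 host.minikube.internal
	     fallthrough
	  }
	)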
	I1025 10:21:41.804301  638584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:21:41.809475  638584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:21:41.831472  638584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:21:41.912561  638584 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1025 10:21:42.139096  638584 node_ready.go:35] waiting up to 6m0s for node "embed-certs-683681" to be "Ready" ...
	I1025 10:21:42.145509  638584 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1025 10:21:39.755018  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:41.756413  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:42.146900  638584 addons.go:514] duration metric: took 518.085843ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 10:21:42.416647  638584 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-683681" context rescaled to 1 replicas
	W1025 10:21:44.142621  638584 node_ready.go:57] node "embed-certs-683681" has "Ready":"False" status (will retry)
	W1025 10:21:44.256001  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:46.755543  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:47.755253  636484 pod_ready.go:94] pod "coredns-66bc5c9577-rznxv" is "Ready"
	I1025 10:21:47.755285  636484 pod_ready.go:86] duration metric: took 31.006445495s for pod "coredns-66bc5c9577-rznxv" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.758305  636484 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.763202  636484 pod_ready.go:94] pod "etcd-default-k8s-diff-port-767846" is "Ready"
	I1025 10:21:47.763230  636484 pod_ready.go:86] duration metric: took 4.871359ms for pod "etcd-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.765533  636484 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.769981  636484 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-767846" is "Ready"
	I1025 10:21:47.770085  636484 pod_ready.go:86] duration metric: took 4.518205ms for pod "kube-apiserver-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.772484  636484 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.952605  636484 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-767846" is "Ready"
	I1025 10:21:47.952636  636484 pod_ready.go:86] duration metric: took 180.129601ms for pod "kube-controller-manager-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:48.153608  636484 pod_ready.go:83] waiting for pod "kube-proxy-cvm5c" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:48.552560  636484 pod_ready.go:94] pod "kube-proxy-cvm5c" is "Ready"
	I1025 10:21:48.552591  636484 pod_ready.go:86] duration metric: took 398.954024ms for pod "kube-proxy-cvm5c" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:48.753044  636484 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:49.152785  636484 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-767846" is "Ready"
	I1025 10:21:49.152816  636484 pod_ready.go:86] duration metric: took 399.744601ms for pod "kube-scheduler-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:49.152828  636484 pod_ready.go:40] duration metric: took 32.410721068s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:49.201278  636484 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 10:21:49.203247  636484 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-767846" cluster and "default" namespace by default
	W1025 10:21:46.143197  638584 node_ready.go:57] node "embed-certs-683681" has "Ready":"False" status (will retry)
	W1025 10:21:48.642439  638584 node_ready.go:57] node "embed-certs-683681" has "Ready":"False" status (will retry)
	W1025 10:21:50.642613  638584 node_ready.go:57] node "embed-certs-683681" has "Ready":"False" status (will retry)
	I1025 10:21:52.643144  638584 node_ready.go:49] node "embed-certs-683681" is "Ready"
	I1025 10:21:52.643184  638584 node_ready.go:38] duration metric: took 10.504034315s for node "embed-certs-683681" to be "Ready" ...
	I1025 10:21:52.643202  638584 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:21:52.643262  638584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:21:52.659492  638584 api_server.go:72] duration metric: took 11.030720868s to wait for apiserver process to appear ...
	I1025 10:21:52.659528  638584 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:21:52.659553  638584 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 10:21:52.666017  638584 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1025 10:21:52.667256  638584 api_server.go:141] control plane version: v1.34.1
	I1025 10:21:52.667289  638584 api_server.go:131] duration metric: took 7.752823ms to wait for apiserver health ...
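	(Note: the healthz probe above is a plain HTTPS GET and can be reproduced against this cluster with curl; certificate verification is skipped here because the API server cert is signed by the test run's minikubeCA:
	  curl -k https://192.168.94.2:8443/healthz
	  # returns 200 with body "ok" when the apiserver is healthy, as logged above
	)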
	I1025 10:21:52.667300  638584 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:21:52.670860  638584 system_pods.go:59] 8 kube-system pods found
	I1025 10:21:52.670907  638584 system_pods.go:61] "coredns-66bc5c9577-545dp" [a2709fe3-a1d1-4394-8cf7-3776dc8fd318] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:52.670917  638584 system_pods.go:61] "etcd-embed-certs-683681" [efd93203-1fbf-495a-8d60-73421d0a6d9c] Running
	I1025 10:21:52.670928  638584 system_pods.go:61] "kindnet-5zktx" [3398616a-6eb4-432e-bb84-ae1f166c7e71] Running
	I1025 10:21:52.670934  638584 system_pods.go:61] "kube-apiserver-embed-certs-683681" [0ff30802-f9d1-465d-8d94-769528b99497] Running
	I1025 10:21:52.670944  638584 system_pods.go:61] "kube-controller-manager-embed-certs-683681" [b794e426-ac8e-48d6-b9e3-38998ed4f272] Running
	I1025 10:21:52.670949  638584 system_pods.go:61] "kube-proxy-dbks6" [551b9ca3-e53d-4be0-bcb5-b96d76be6c14] Running
	I1025 10:21:52.670958  638584 system_pods.go:61] "kube-scheduler-embed-certs-683681" [97ed6526-e7b5-4086-a584-7e0cb301fb30] Running
	I1025 10:21:52.670966  638584 system_pods.go:61] "storage-provisioner" [42d81686-dd78-4ed1-9ead-cbcdca1d14ce] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:52.670977  638584 system_pods.go:74] duration metric: took 3.669298ms to wait for pod list to return data ...
	I1025 10:21:52.670994  638584 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:21:52.673975  638584 default_sa.go:45] found service account: "default"
	I1025 10:21:52.674010  638584 default_sa.go:55] duration metric: took 3.005154ms for default service account to be created ...
	I1025 10:21:52.674024  638584 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:21:52.677130  638584 system_pods.go:86] 8 kube-system pods found
	I1025 10:21:52.677169  638584 system_pods.go:89] "coredns-66bc5c9577-545dp" [a2709fe3-a1d1-4394-8cf7-3776dc8fd318] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:52.677181  638584 system_pods.go:89] "etcd-embed-certs-683681" [efd93203-1fbf-495a-8d60-73421d0a6d9c] Running
	I1025 10:21:52.677191  638584 system_pods.go:89] "kindnet-5zktx" [3398616a-6eb4-432e-bb84-ae1f166c7e71] Running
	I1025 10:21:52.677195  638584 system_pods.go:89] "kube-apiserver-embed-certs-683681" [0ff30802-f9d1-465d-8d94-769528b99497] Running
	I1025 10:21:52.677201  638584 system_pods.go:89] "kube-controller-manager-embed-certs-683681" [b794e426-ac8e-48d6-b9e3-38998ed4f272] Running
	I1025 10:21:52.677206  638584 system_pods.go:89] "kube-proxy-dbks6" [551b9ca3-e53d-4be0-bcb5-b96d76be6c14] Running
	I1025 10:21:52.677212  638584 system_pods.go:89] "kube-scheduler-embed-certs-683681" [97ed6526-e7b5-4086-a584-7e0cb301fb30] Running
	I1025 10:21:52.677223  638584 system_pods.go:89] "storage-provisioner" [42d81686-dd78-4ed1-9ead-cbcdca1d14ce] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:52.677255  638584 retry.go:31] will retry after 207.699186ms: missing components: kube-dns
	I1025 10:21:52.889747  638584 system_pods.go:86] 8 kube-system pods found
	I1025 10:21:52.889810  638584 system_pods.go:89] "coredns-66bc5c9577-545dp" [a2709fe3-a1d1-4394-8cf7-3776dc8fd318] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:52.889819  638584 system_pods.go:89] "etcd-embed-certs-683681" [efd93203-1fbf-495a-8d60-73421d0a6d9c] Running
	I1025 10:21:52.889834  638584 system_pods.go:89] "kindnet-5zktx" [3398616a-6eb4-432e-bb84-ae1f166c7e71] Running
	I1025 10:21:52.889839  638584 system_pods.go:89] "kube-apiserver-embed-certs-683681" [0ff30802-f9d1-465d-8d94-769528b99497] Running
	I1025 10:21:52.889854  638584 system_pods.go:89] "kube-controller-manager-embed-certs-683681" [b794e426-ac8e-48d6-b9e3-38998ed4f272] Running
	I1025 10:21:52.889859  638584 system_pods.go:89] "kube-proxy-dbks6" [551b9ca3-e53d-4be0-bcb5-b96d76be6c14] Running
	I1025 10:21:52.889867  638584 system_pods.go:89] "kube-scheduler-embed-certs-683681" [97ed6526-e7b5-4086-a584-7e0cb301fb30] Running
	I1025 10:21:52.889879  638584 system_pods.go:89] "storage-provisioner" [42d81686-dd78-4ed1-9ead-cbcdca1d14ce] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:52.889906  638584 retry.go:31] will retry after 319.387436ms: missing components: kube-dns
	I1025 10:21:53.212708  638584 system_pods.go:86] 8 kube-system pods found
	I1025 10:21:53.212741  638584 system_pods.go:89] "coredns-66bc5c9577-545dp" [a2709fe3-a1d1-4394-8cf7-3776dc8fd318] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:53.212748  638584 system_pods.go:89] "etcd-embed-certs-683681" [efd93203-1fbf-495a-8d60-73421d0a6d9c] Running
	I1025 10:21:53.212753  638584 system_pods.go:89] "kindnet-5zktx" [3398616a-6eb4-432e-bb84-ae1f166c7e71] Running
	I1025 10:21:53.212757  638584 system_pods.go:89] "kube-apiserver-embed-certs-683681" [0ff30802-f9d1-465d-8d94-769528b99497] Running
	I1025 10:21:53.212762  638584 system_pods.go:89] "kube-controller-manager-embed-certs-683681" [b794e426-ac8e-48d6-b9e3-38998ed4f272] Running
	I1025 10:21:53.212765  638584 system_pods.go:89] "kube-proxy-dbks6" [551b9ca3-e53d-4be0-bcb5-b96d76be6c14] Running
	I1025 10:21:53.212769  638584 system_pods.go:89] "kube-scheduler-embed-certs-683681" [97ed6526-e7b5-4086-a584-7e0cb301fb30] Running
	I1025 10:21:53.212772  638584 system_pods.go:89] "storage-provisioner" [42d81686-dd78-4ed1-9ead-cbcdca1d14ce] Running
	I1025 10:21:53.212781  638584 system_pods.go:126] duration metric: took 538.748598ms to wait for k8s-apps to be running ...
	I1025 10:21:53.212792  638584 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:21:53.212838  638584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:21:53.227721  638584 system_svc.go:56] duration metric: took 14.91387ms WaitForService to wait for kubelet
	I1025 10:21:53.227757  638584 kubeadm.go:586] duration metric: took 11.598992037s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:21:53.227783  638584 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:21:53.231073  638584 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 10:21:53.231102  638584 node_conditions.go:123] node cpu capacity is 8
	I1025 10:21:53.231116  638584 node_conditions.go:105] duration metric: took 3.327789ms to run NodePressure ...
	I1025 10:21:53.231127  638584 start.go:241] waiting for startup goroutines ...
	I1025 10:21:53.231134  638584 start.go:246] waiting for cluster config update ...
	I1025 10:21:53.231145  638584 start.go:255] writing updated cluster config ...
	I1025 10:21:53.231433  638584 ssh_runner.go:195] Run: rm -f paused
	I1025 10:21:53.235996  638584 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:53.239628  638584 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-545dp" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.246519  638584 pod_ready.go:94] pod "coredns-66bc5c9577-545dp" is "Ready"
	I1025 10:21:54.246556  638584 pod_ready.go:86] duration metric: took 1.006903697s for pod "coredns-66bc5c9577-545dp" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.249657  638584 pod_ready.go:83] waiting for pod "etcd-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.254284  638584 pod_ready.go:94] pod "etcd-embed-certs-683681" is "Ready"
	I1025 10:21:54.254351  638584 pod_ready.go:86] duration metric: took 4.629709ms for pod "etcd-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.256768  638584 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.261130  638584 pod_ready.go:94] pod "kube-apiserver-embed-certs-683681" is "Ready"
	I1025 10:21:54.261157  638584 pod_ready.go:86] duration metric: took 4.363563ms for pod "kube-apiserver-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.263224  638584 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.443581  638584 pod_ready.go:94] pod "kube-controller-manager-embed-certs-683681" is "Ready"
	I1025 10:21:54.443610  638584 pod_ready.go:86] duration metric: took 180.36054ms for pod "kube-controller-manager-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.644082  638584 pod_ready.go:83] waiting for pod "kube-proxy-dbks6" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:55.044226  638584 pod_ready.go:94] pod "kube-proxy-dbks6" is "Ready"
	I1025 10:21:55.044259  638584 pod_ready.go:86] duration metric: took 400.15124ms for pod "kube-proxy-dbks6" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:55.243900  638584 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:55.643886  638584 pod_ready.go:94] pod "kube-scheduler-embed-certs-683681" is "Ready"
	I1025 10:21:55.643919  638584 pod_ready.go:86] duration metric: took 399.992242ms for pod "kube-scheduler-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:55.643935  638584 pod_ready.go:40] duration metric: took 2.407895178s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:55.697477  638584 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 10:21:55.699399  638584 out.go:179] * Done! kubectl is now configured to use "embed-certs-683681" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 25 10:21:10 no-preload-899665 crio[560]: time="2025-10-25T10:21:10.327883138Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:21:10 no-preload-899665 crio[560]: time="2025-10-25T10:21:10.529778478Z" level=info msg="Removing container: 99258514298e27b07b8a53db94e30c375ba94bdec5b4c3c6ff8fb28e14743750" id=bd4e7e36-1401-470c-bd3e-d9a0d24141d3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:21:10 no-preload-899665 crio[560]: time="2025-10-25T10:21:10.54060566Z" level=info msg="Removed container 99258514298e27b07b8a53db94e30c375ba94bdec5b4c3c6ff8fb28e14743750: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9/dashboard-metrics-scraper" id=bd4e7e36-1401-470c-bd3e-d9a0d24141d3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:21:20 no-preload-899665 crio[560]: time="2025-10-25T10:21:20.294152639Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f21de032-549d-49b4-b27d-197453a80201 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:21:20 no-preload-899665 crio[560]: time="2025-10-25T10:21:20.297394337Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2b6e52d1-99a8-4d44-bf4c-f939314c614b name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:21:20 no-preload-899665 crio[560]: time="2025-10-25T10:21:20.301211088Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9/dashboard-metrics-scraper" id=b2167c94-a25b-49c6-b4e5-00385a084fb2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:21:20 no-preload-899665 crio[560]: time="2025-10-25T10:21:20.301476354Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:20 no-preload-899665 crio[560]: time="2025-10-25T10:21:20.309178056Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:20 no-preload-899665 crio[560]: time="2025-10-25T10:21:20.309745542Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:20 no-preload-899665 crio[560]: time="2025-10-25T10:21:20.347517343Z" level=info msg="Created container 1d647509766f8d83b8f1b6c648758ecc56de325140d4ff1e14dee9c0449bffbb: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9/dashboard-metrics-scraper" id=b2167c94-a25b-49c6-b4e5-00385a084fb2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:21:20 no-preload-899665 crio[560]: time="2025-10-25T10:21:20.348262977Z" level=info msg="Starting container: 1d647509766f8d83b8f1b6c648758ecc56de325140d4ff1e14dee9c0449bffbb" id=27e6b0eb-5e5a-4ec9-93b1-8729278f4b47 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:21:20 no-preload-899665 crio[560]: time="2025-10-25T10:21:20.350690333Z" level=info msg="Started container" PID=1741 containerID=1d647509766f8d83b8f1b6c648758ecc56de325140d4ff1e14dee9c0449bffbb description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9/dashboard-metrics-scraper id=27e6b0eb-5e5a-4ec9-93b1-8729278f4b47 name=/runtime.v1.RuntimeService/StartContainer sandboxID=37572d825b67e73f53a655283b972712e6ae4e28f13f80347070ddc4faf94677
	Oct 25 10:21:20 no-preload-899665 crio[560]: time="2025-10-25T10:21:20.564548027Z" level=info msg="Removing container: 9154c42e2b9fb263fd3a632e65db5c99f3e7b9406d6433a4fca9383898cc09c7" id=7ea8b6ca-28a7-4530-9a66-8478a92e31ef name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:21:20 no-preload-899665 crio[560]: time="2025-10-25T10:21:20.578649748Z" level=info msg="Removed container 9154c42e2b9fb263fd3a632e65db5c99f3e7b9406d6433a4fca9383898cc09c7: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9/dashboard-metrics-scraper" id=7ea8b6ca-28a7-4530-9a66-8478a92e31ef name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:21:45 no-preload-899665 crio[560]: time="2025-10-25T10:21:45.42774688Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6d5073b4-d37f-476a-883c-896562180d28 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:21:45 no-preload-899665 crio[560]: time="2025-10-25T10:21:45.428841492Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7ccf43fb-0f4f-4231-b881-1d87b0531b8b name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:21:45 no-preload-899665 crio[560]: time="2025-10-25T10:21:45.430296013Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9/dashboard-metrics-scraper" id=b3d8d9fc-a54f-4b4d-8cf0-8baf9376fd28 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:21:45 no-preload-899665 crio[560]: time="2025-10-25T10:21:45.430479054Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:45 no-preload-899665 crio[560]: time="2025-10-25T10:21:45.436184347Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:45 no-preload-899665 crio[560]: time="2025-10-25T10:21:45.436719415Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:45 no-preload-899665 crio[560]: time="2025-10-25T10:21:45.473714177Z" level=info msg="Created container 8cfca56338f81739721a8fc6791605752dfe0bc05037803fa23ac142fec9a9e6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9/dashboard-metrics-scraper" id=b3d8d9fc-a54f-4b4d-8cf0-8baf9376fd28 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:21:45 no-preload-899665 crio[560]: time="2025-10-25T10:21:45.47442335Z" level=info msg="Starting container: 8cfca56338f81739721a8fc6791605752dfe0bc05037803fa23ac142fec9a9e6" id=f2210c78-b906-41b9-a5a4-992938f38e75 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:21:45 no-preload-899665 crio[560]: time="2025-10-25T10:21:45.476853344Z" level=info msg="Started container" PID=1773 containerID=8cfca56338f81739721a8fc6791605752dfe0bc05037803fa23ac142fec9a9e6 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9/dashboard-metrics-scraper id=f2210c78-b906-41b9-a5a4-992938f38e75 name=/runtime.v1.RuntimeService/StartContainer sandboxID=37572d825b67e73f53a655283b972712e6ae4e28f13f80347070ddc4faf94677
	Oct 25 10:21:45 no-preload-899665 crio[560]: time="2025-10-25T10:21:45.639251067Z" level=info msg="Removing container: 1d647509766f8d83b8f1b6c648758ecc56de325140d4ff1e14dee9c0449bffbb" id=820f914a-612d-4f87-bfe7-55355ff8e9f6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:21:45 no-preload-899665 crio[560]: time="2025-10-25T10:21:45.65243796Z" level=info msg="Removed container 1d647509766f8d83b8f1b6c648758ecc56de325140d4ff1e14dee9c0449bffbb: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9/dashboard-metrics-scraper" id=820f914a-612d-4f87-bfe7-55355ff8e9f6 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	8cfca56338f81       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago       Exited              dashboard-metrics-scraper   3                   37572d825b67e       dashboard-metrics-scraper-6ffb444bf9-8krs9   kubernetes-dashboard
	6dcccb2cdcdf4       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   50 seconds ago       Running             kubernetes-dashboard        0                   5a24a3c930837       kubernetes-dashboard-855c9754f9-6zv5c        kubernetes-dashboard
	e435fa14f2cce       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago       Running             storage-provisioner         1                   c567cda8d1f34       storage-provisioner                          kube-system
	22cccd3b8325d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           57 seconds ago       Running             coredns                     0                   7d0e0eb7eb5f5       coredns-66bc5c9577-gtnvx                     kube-system
	21c1a2e862038       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   66d7c45959f2b       busybox                                      default
	7aa07387b3dad       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           57 seconds ago       Running             kindnet-cni                 0                   fa1a5ad8c2df9       kindnet-sjskf                                kube-system
	6c060dfbf2e50       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   c567cda8d1f34       storage-provisioner                          kube-system
	059ea673d4650       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           57 seconds ago       Running             kube-proxy                  0                   4ab1bdc0f77e3       kube-proxy-fdthr                             kube-system
	5120b28e61a32       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              0                   674bba36c7a2f       kube-apiserver-no-preload-899665             kube-system
	352d3fd34e0c2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              0                   99f80224a8fc9       kube-scheduler-no-preload-899665             kube-system
	b199511be2bb2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     0                   75103cdc9d767       kube-controller-manager-no-preload-899665    kube-system
	f94925c7a0544       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        0                   77db653658186       etcd-no-preload-899665                       kube-system
	
	
	==> coredns [22cccd3b8325d38064ff3cf5dec75ac34e8ea0682f221af167776ca55146f3d7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49658 - 59182 "HINFO IN 1381080871278460682.8429651544703439985. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.070735667s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-899665
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-899665
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=no-preload-899665
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_19_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:19:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-899665
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:21:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:21:50 +0000   Sat, 25 Oct 2025 10:19:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:21:50 +0000   Sat, 25 Oct 2025 10:19:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:21:50 +0000   Sat, 25 Oct 2025 10:19:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:21:50 +0000   Sat, 25 Oct 2025 10:20:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-899665
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                9552a4c0-ffdc-4517-8db3-fa4623099c2a
	  Boot ID:                    41de8bfa-0cc1-441f-80d4-c56fb8371229
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-gtnvx                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     114s
	  kube-system                 etcd-no-preload-899665                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-sjskf                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-no-preload-899665              250m (3%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-no-preload-899665     200m (2%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-fdthr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-no-preload-899665              100m (1%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-8krs9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-6zv5c         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 113s                 kube-proxy       
	  Normal  Starting                 57s                  kube-proxy       
	  Normal  Starting                 2m6s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node no-preload-899665 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node no-preload-899665 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s (x8 over 2m6s)  kubelet          Node no-preload-899665 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m                   kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    119s                 kubelet          Node no-preload-899665 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s                 kubelet          Node no-preload-899665 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  119s                 kubelet          Node no-preload-899665 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           115s                 node-controller  Node no-preload-899665 event: Registered Node no-preload-899665 in Controller
	  Normal  NodeReady                100s                 kubelet          Node no-preload-899665 status is now: NodeReady
	  Normal  Starting                 61s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)    kubelet          Node no-preload-899665 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)    kubelet          Node no-preload-899665 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)    kubelet          Node no-preload-899665 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           56s                  node-controller  Node no-preload-899665 event: Registered Node no-preload-899665 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[Oct25 10:18] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 3d 4d bf 49 5d 08 06
	[  +0.000365] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fe 72 b8 ab d2 81 08 06
	[ +29.291338] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 96 23 11 37 e3 00 08 06
	[  +0.000335] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[ +21.527050] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 89 98 95 1f c3 08 06
	[  +0.000689] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[Oct25 10:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[  +9.472150] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
	[  +6.585715] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ce 90 e9 36 a0 95 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[ +15.111475] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 5e 04 d2 54 0d 08 06
	[  +0.000467] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
	
	
	==> etcd [f94925c7a05442fb6214b27d55f74ec54efa54bb994038837f4ee6aec190c793] <==
	{"level":"warn","ts":"2025-10-25T10:20:58.312532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.322086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.333470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.346205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.352028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.360869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.369766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.378975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.387828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.403425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.409905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.417677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.425593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.433890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.442014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.450749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.458481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.466370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.474353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.484491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.499778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.508613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.517579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:58.577225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34978","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T10:21:08.820530Z","caller":"traceutil/trace.go:172","msg":"trace[196694558] transaction","detail":"{read_only:false; response_revision:622; number_of_response:1; }","duration":"213.714466ms","start":"2025-10-25T10:21:08.606793Z","end":"2025-10-25T10:21:08.820508Z","steps":["trace[196694558] 'process raft request'  (duration: 126.577035ms)","trace[196694558] 'compare'  (duration: 87.039951ms)"],"step_count":2}
	
	
	==> kernel <==
	 10:21:57 up  2:04,  0 user,  load average: 5.48, 5.16, 5.97
	Linux no-preload-899665 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7aa07387b3dadb428f650a505ba419b3a80a74e2038ef9adb6684c94298a0ca5] <==
	I1025 10:21:00.000938       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:21:00.001205       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 10:21:00.001408       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:21:00.001431       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:21:00.001471       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:21:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:21:00.302283       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:21:00.302372       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:21:00.302386       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:21:00.400112       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 10:21:00.602603       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:21:00.602654       1 metrics.go:72] Registering metrics
	I1025 10:21:00.602733       1 controller.go:711] "Syncing nftables rules"
	I1025 10:21:10.300482       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:21:10.300559       1 main.go:301] handling current node
	I1025 10:21:20.307439       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:21:20.307474       1 main.go:301] handling current node
	I1025 10:21:30.300470       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:21:30.300545       1 main.go:301] handling current node
	I1025 10:21:40.300734       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:21:40.300777       1 main.go:301] handling current node
	I1025 10:21:50.302346       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:21:50.302406       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5120b28e61a325e39f449795f46e9d4332fe4fe8d721f0cb753fff3aeddf5964] <==
	I1025 10:20:59.148185       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:20:59.148192       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:20:59.148986       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 10:20:59.149249       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 10:20:59.149308       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 10:20:59.149375       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 10:20:59.151405       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 10:20:59.156394       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 10:20:59.157981       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 10:20:59.158071       1 policy_source.go:240] refreshing policies
	E1025 10:20:59.162930       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 10:20:59.204115       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:20:59.242201       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:20:59.434239       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:20:59.532099       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:20:59.566674       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:20:59.588213       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:20:59.606237       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:20:59.664347       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.116.208"}
	I1025 10:20:59.676119       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.147.92"}
	I1025 10:21:00.052893       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:21:02.070551       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:21:02.368240       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:21:02.368240       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:21:02.468459       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b199511be2bb272a9b6fcefc2c7f2d0cc2c364bcb33d5762b0f79b58442e445a] <==
	I1025 10:21:01.913902       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-899665"
	I1025 10:21:01.913962       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 10:21:01.915109       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 10:21:01.915117       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:21:01.915262       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:21:01.915491       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 10:21:01.915508       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 10:21:01.915527       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 10:21:01.915661       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 10:21:01.915732       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 10:21:01.915703       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 10:21:01.915943       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 10:21:01.917366       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 10:21:01.922002       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:21:01.922024       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:21:01.922034       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:21:01.922000       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 10:21:01.923134       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:21:01.924259       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 10:21:01.925488       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:21:01.926802       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 10:21:01.938148       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 10:21:01.939384       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 10:21:01.939400       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 10:21:01.944799       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [059ea673d4650d6e7e9628b8a7cf58c09fb38646edaba28e0ed69edba66a5ad8] <==
	I1025 10:20:59.819138       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:20:59.895465       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:20:59.996550       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:20:59.996601       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 10:20:59.996682       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:21:00.017955       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:21:00.018043       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:21:00.024614       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:21:00.025076       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:21:00.025110       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:21:00.026965       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:21:00.026988       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:21:00.027108       1 config.go:309] "Starting node config controller"
	I1025 10:21:00.027116       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:21:00.027382       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:21:00.027419       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:21:00.027832       1 config.go:200] "Starting service config controller"
	I1025 10:21:00.028488       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:21:00.127192       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:21:00.127212       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 10:21:00.127789       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:21:00.128918       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [352d3fd34e0c2d541fcf1e1a74e6466f8d1c2eeb5794c69f26b05784aa993d7f] <==
	I1025 10:20:57.507887       1 serving.go:386] Generated self-signed cert in-memory
	W1025 10:20:59.094808       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 10:20:59.094932       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 10:20:59.094949       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 10:20:59.094966       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 10:20:59.155691       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:20:59.155735       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:20:59.159110       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:20:59.159205       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:20:59.159209       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 10:20:59.159057       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:20:59.260410       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:21:02 no-preload-899665 kubelet[703]: I1025 10:21:02.551781     703 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1c1c50ff-70c9-457a-a5e5-dd294a77f730-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-6zv5c\" (UID: \"1c1c50ff-70c9-457a-a5e5-dd294a77f730\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6zv5c"
	Oct 25 10:21:07 no-preload-899665 kubelet[703]: I1025 10:21:07.695408     703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6zv5c" podStartSLOduration=1.714452143 podStartE2EDuration="5.695383288s" podCreationTimestamp="2025-10-25 10:21:02 +0000 UTC" firstStartedPulling="2025-10-25 10:21:02.772614366 +0000 UTC m=+6.529933892" lastFinishedPulling="2025-10-25 10:21:06.753545511 +0000 UTC m=+10.510865037" observedRunningTime="2025-10-25 10:21:07.531669078 +0000 UTC m=+11.288988611" watchObservedRunningTime="2025-10-25 10:21:07.695383288 +0000 UTC m=+11.452702823"
	Oct 25 10:21:08 no-preload-899665 kubelet[703]: I1025 10:21:08.963071     703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 25 10:21:09 no-preload-899665 kubelet[703]: I1025 10:21:09.521743     703 scope.go:117] "RemoveContainer" containerID="99258514298e27b07b8a53db94e30c375ba94bdec5b4c3c6ff8fb28e14743750"
	Oct 25 10:21:10 no-preload-899665 kubelet[703]: I1025 10:21:10.527493     703 scope.go:117] "RemoveContainer" containerID="99258514298e27b07b8a53db94e30c375ba94bdec5b4c3c6ff8fb28e14743750"
	Oct 25 10:21:10 no-preload-899665 kubelet[703]: I1025 10:21:10.527538     703 scope.go:117] "RemoveContainer" containerID="9154c42e2b9fb263fd3a632e65db5c99f3e7b9406d6433a4fca9383898cc09c7"
	Oct 25 10:21:10 no-preload-899665 kubelet[703]: E1025 10:21:10.527696     703 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8krs9_kubernetes-dashboard(6682609d-acec-4445-8e7c-e544d9877ae8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9" podUID="6682609d-acec-4445-8e7c-e544d9877ae8"
	Oct 25 10:21:11 no-preload-899665 kubelet[703]: I1025 10:21:11.532586     703 scope.go:117] "RemoveContainer" containerID="9154c42e2b9fb263fd3a632e65db5c99f3e7b9406d6433a4fca9383898cc09c7"
	Oct 25 10:21:11 no-preload-899665 kubelet[703]: E1025 10:21:11.532830     703 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8krs9_kubernetes-dashboard(6682609d-acec-4445-8e7c-e544d9877ae8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9" podUID="6682609d-acec-4445-8e7c-e544d9877ae8"
	Oct 25 10:21:20 no-preload-899665 kubelet[703]: I1025 10:21:20.293508     703 scope.go:117] "RemoveContainer" containerID="9154c42e2b9fb263fd3a632e65db5c99f3e7b9406d6433a4fca9383898cc09c7"
	Oct 25 10:21:20 no-preload-899665 kubelet[703]: I1025 10:21:20.562596     703 scope.go:117] "RemoveContainer" containerID="9154c42e2b9fb263fd3a632e65db5c99f3e7b9406d6433a4fca9383898cc09c7"
	Oct 25 10:21:20 no-preload-899665 kubelet[703]: I1025 10:21:20.562827     703 scope.go:117] "RemoveContainer" containerID="1d647509766f8d83b8f1b6c648758ecc56de325140d4ff1e14dee9c0449bffbb"
	Oct 25 10:21:20 no-preload-899665 kubelet[703]: E1025 10:21:20.563037     703 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8krs9_kubernetes-dashboard(6682609d-acec-4445-8e7c-e544d9877ae8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9" podUID="6682609d-acec-4445-8e7c-e544d9877ae8"
	Oct 25 10:21:30 no-preload-899665 kubelet[703]: I1025 10:21:30.293426     703 scope.go:117] "RemoveContainer" containerID="1d647509766f8d83b8f1b6c648758ecc56de325140d4ff1e14dee9c0449bffbb"
	Oct 25 10:21:30 no-preload-899665 kubelet[703]: E1025 10:21:30.293636     703 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8krs9_kubernetes-dashboard(6682609d-acec-4445-8e7c-e544d9877ae8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9" podUID="6682609d-acec-4445-8e7c-e544d9877ae8"
	Oct 25 10:21:45 no-preload-899665 kubelet[703]: I1025 10:21:45.427135     703 scope.go:117] "RemoveContainer" containerID="1d647509766f8d83b8f1b6c648758ecc56de325140d4ff1e14dee9c0449bffbb"
	Oct 25 10:21:45 no-preload-899665 kubelet[703]: I1025 10:21:45.637917     703 scope.go:117] "RemoveContainer" containerID="1d647509766f8d83b8f1b6c648758ecc56de325140d4ff1e14dee9c0449bffbb"
	Oct 25 10:21:45 no-preload-899665 kubelet[703]: I1025 10:21:45.638147     703 scope.go:117] "RemoveContainer" containerID="8cfca56338f81739721a8fc6791605752dfe0bc05037803fa23ac142fec9a9e6"
	Oct 25 10:21:45 no-preload-899665 kubelet[703]: E1025 10:21:45.638400     703 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8krs9_kubernetes-dashboard(6682609d-acec-4445-8e7c-e544d9877ae8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9" podUID="6682609d-acec-4445-8e7c-e544d9877ae8"
	Oct 25 10:21:50 no-preload-899665 kubelet[703]: I1025 10:21:50.293855     703 scope.go:117] "RemoveContainer" containerID="8cfca56338f81739721a8fc6791605752dfe0bc05037803fa23ac142fec9a9e6"
	Oct 25 10:21:50 no-preload-899665 kubelet[703]: E1025 10:21:50.294042     703 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8krs9_kubernetes-dashboard(6682609d-acec-4445-8e7c-e544d9877ae8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8krs9" podUID="6682609d-acec-4445-8e7c-e544d9877ae8"
	Oct 25 10:21:53 no-preload-899665 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:21:53 no-preload-899665 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:21:53 no-preload-899665 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 10:21:53 no-preload-899665 systemd[1]: kubelet.service: Consumed 2.022s CPU time.
	
	
	==> kubernetes-dashboard [6dcccb2cdcdf4276c8b975282d608c7438084301444b6d594bdeb6eb819546b9] <==
	2025/10/25 10:21:06 Starting overwatch
	2025/10/25 10:21:06 Using namespace: kubernetes-dashboard
	2025/10/25 10:21:06 Using in-cluster config to connect to apiserver
	2025/10/25 10:21:06 Using secret token for csrf signing
	2025/10/25 10:21:06 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 10:21:06 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 10:21:06 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 10:21:06 Generating JWE encryption key
	2025/10/25 10:21:06 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 10:21:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 10:21:07 Initializing JWE encryption key from synchronized object
	2025/10/25 10:21:07 Creating in-cluster Sidecar client
	2025/10/25 10:21:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:21:07 Serving insecurely on HTTP port: 9090
	2025/10/25 10:21:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [6c060dfbf2e501de983eb8ec105f8a398270827cd89f6a0aa1efc2893da367a6] <==
	I1025 10:20:59.773876       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 10:20:59.777696       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [e435fa14f2cceba2eb3f8f15eb6412ef2454dbc3812f08964c402cf1e6522851] <==
	W1025 10:21:32.041353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:34.045263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:34.050461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:36.054145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:36.058745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:38.061994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:38.075559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:40.079232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:40.084736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:42.089007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:42.094080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:44.097107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:44.102291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:46.105426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:46.109573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:48.113299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:48.118540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:50.122468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:50.126560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:52.130681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:52.136425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:54.140217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:54.144383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:56.147298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:56.151707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
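Note on the repeating warnings in the storage-provisioner log above: they fire in pairs roughly every two seconds, which is consistent with a leader-election renew loop built on the deprecated v1 Endpoints resource lock. Moving the lock to a coordination.k8s.io/v1 Lease is the standard way to silence them. A minimal client-go sketch, assuming a Lease-based lock; the lock name, namespace, and helper function are illustrative, not the provisioner's actual source:

	package main

	import (
		"context"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	// runWithLeaseLock runs fn while holding a coordination.k8s.io/v1 Lease,
	// avoiding the deprecated v1 Endpoints lock that triggers the warnings above.
	func runWithLeaseLock(ctx context.Context, client kubernetes.Interface, id string, fn func(context.Context)) {
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "storage-provisioner", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: fn,
				OnStoppedLeading: func() {},
			},
		})
	}
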
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-899665 -n no-preload-899665
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-899665 -n no-preload-899665: exit status 2 (376.28575ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-899665 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (5.96s)
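Separately, the first storage-provisioner container in the logs above exited fatally because it probed https://10.96.0.1:443/version before the restarted apiserver was accepting connections; the second instance, started later, came up fine. Polling the apiserver at startup instead of dying on the first refused dial is the usual mitigation. A sketch with client-go, assuming an in-cluster config; waitForAPIServer is a hypothetical helper, not minikube's code:

	package main

	import (
		"context"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	// waitForAPIServer blocks until the in-cluster apiserver answers /version,
	// instead of exiting fatally on the first refused connection.
	func waitForAPIServer(ctx context.Context) (*kubernetes.Clientset, error) {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			return nil, err
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			return nil, err
		}
		err = wait.PollUntilContextTimeout(ctx, 2*time.Second, 2*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				_, err := client.Discovery().ServerVersion()
				return err == nil, nil // treat transient errors as "not ready yet"
			})
		return client, err
	}
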

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-767846 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-767846 --alsologtostderr -v=1: exit status 80 (1.770822045s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-767846 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 10:22:00.999868  647497 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:22:01.000235  647497 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:22:01.000249  647497 out.go:374] Setting ErrFile to fd 2...
	I1025 10:22:01.000256  647497 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:22:01.000590  647497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 10:22:01.000947  647497 out.go:368] Setting JSON to false
	I1025 10:22:01.001012  647497 mustload.go:65] Loading cluster: default-k8s-diff-port-767846
	I1025 10:22:01.001580  647497 config.go:182] Loaded profile config "default-k8s-diff-port-767846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:22:01.002237  647497 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:22:01.024135  647497 host.go:66] Checking if "default-k8s-diff-port-767846" exists ...
	I1025 10:22:01.024525  647497 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:22:01.090623  647497 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:68 SystemTime:2025-10-25 10:22:01.078696882 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:22:01.091311  647497 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-767846 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 10:22:01.095879  647497 out.go:179] * Pausing node default-k8s-diff-port-767846 ... 
	I1025 10:22:01.098024  647497 host.go:66] Checking if "default-k8s-diff-port-767846" exists ...
	I1025 10:22:01.098340  647497 ssh_runner.go:195] Run: systemctl --version
	I1025 10:22:01.098386  647497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:22:01.119371  647497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:22:01.225906  647497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:22:01.256390  647497 pause.go:52] kubelet running: true
	I1025 10:22:01.256465  647497 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:22:01.416760  647497 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:22:01.416857  647497 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:22:01.497698  647497 cri.go:89] found id: "24856409af1d28dfd7c81bbb566035594b19ffe4e449271ef2769f0a51f01272"
	I1025 10:22:01.497723  647497 cri.go:89] found id: "09e2459273fad439995d9ffdb8adfd372d7c377970843fbc1f657d31bc15c555"
	I1025 10:22:01.497728  647497 cri.go:89] found id: "2f0454c1c473b531c3c2ce0e0e81352e26d1c0cd6888ff3fe87bd24e68ae0248"
	I1025 10:22:01.497732  647497 cri.go:89] found id: "ca8e9fdba848b911be60a6b3b46d5c7a4141cbb69f8d11609a1d58392aeee7c1"
	I1025 10:22:01.497735  647497 cri.go:89] found id: "040afacf3651f3df296c0fb9e05451bd6f2a7e10325871a10ea903d99da7a876"
	I1025 10:22:01.497741  647497 cri.go:89] found id: "5651b5355eb316ad91569abe8d79084a109bfb7f5e3317226217acc032d02de1"
	I1025 10:22:01.497743  647497 cri.go:89] found id: "4a3076ac0e1e7cab1ae1e3436bd70e3c3b3965b186f842a7e0c0d524505d0c57"
	I1025 10:22:01.497746  647497 cri.go:89] found id: "19816f19d39c5773a667353841a1802f9e8d4a9493ed76177e3cffba9eb45dd7"
	I1025 10:22:01.497748  647497 cri.go:89] found id: "93e7c0501a9a92272de292874e804fe8724d5cd8097e77aa3924e634b8f8d63b"
	I1025 10:22:01.497773  647497 cri.go:89] found id: "1c249100b1cdb4e0f46f4f1eee7d35d1ec8fc6f35a9262f42b142aeb9b478f15"
	I1025 10:22:01.497777  647497 cri.go:89] found id: "fb5a07f67d104ece5c4e59cf02a6acaa20151d01116039e6818d51c497d4e740"
	I1025 10:22:01.497781  647497 cri.go:89] found id: ""
	I1025 10:22:01.497829  647497 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:22:01.512305  647497 retry.go:31] will retry after 337.128073ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:22:01Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:22:01.849877  647497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:22:01.865298  647497 pause.go:52] kubelet running: false
	I1025 10:22:01.865395  647497 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:22:02.010979  647497 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:22:02.011059  647497 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:22:02.086738  647497 cri.go:89] found id: "24856409af1d28dfd7c81bbb566035594b19ffe4e449271ef2769f0a51f01272"
	I1025 10:22:02.086767  647497 cri.go:89] found id: "09e2459273fad439995d9ffdb8adfd372d7c377970843fbc1f657d31bc15c555"
	I1025 10:22:02.086772  647497 cri.go:89] found id: "2f0454c1c473b531c3c2ce0e0e81352e26d1c0cd6888ff3fe87bd24e68ae0248"
	I1025 10:22:02.086777  647497 cri.go:89] found id: "ca8e9fdba848b911be60a6b3b46d5c7a4141cbb69f8d11609a1d58392aeee7c1"
	I1025 10:22:02.086781  647497 cri.go:89] found id: "040afacf3651f3df296c0fb9e05451bd6f2a7e10325871a10ea903d99da7a876"
	I1025 10:22:02.086786  647497 cri.go:89] found id: "5651b5355eb316ad91569abe8d79084a109bfb7f5e3317226217acc032d02de1"
	I1025 10:22:02.086800  647497 cri.go:89] found id: "4a3076ac0e1e7cab1ae1e3436bd70e3c3b3965b186f842a7e0c0d524505d0c57"
	I1025 10:22:02.086804  647497 cri.go:89] found id: "19816f19d39c5773a667353841a1802f9e8d4a9493ed76177e3cffba9eb45dd7"
	I1025 10:22:02.086808  647497 cri.go:89] found id: "93e7c0501a9a92272de292874e804fe8724d5cd8097e77aa3924e634b8f8d63b"
	I1025 10:22:02.086817  647497 cri.go:89] found id: "1c249100b1cdb4e0f46f4f1eee7d35d1ec8fc6f35a9262f42b142aeb9b478f15"
	I1025 10:22:02.086821  647497 cri.go:89] found id: "fb5a07f67d104ece5c4e59cf02a6acaa20151d01116039e6818d51c497d4e740"
	I1025 10:22:02.086826  647497 cri.go:89] found id: ""
	I1025 10:22:02.086882  647497 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:22:02.100678  647497 retry.go:31] will retry after 332.582099ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:22:02Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:22:02.434421  647497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:22:02.450183  647497 pause.go:52] kubelet running: false
	I1025 10:22:02.450240  647497 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:22:02.591497  647497 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:22:02.591619  647497 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:22:02.672382  647497 cri.go:89] found id: "24856409af1d28dfd7c81bbb566035594b19ffe4e449271ef2769f0a51f01272"
	I1025 10:22:02.672416  647497 cri.go:89] found id: "09e2459273fad439995d9ffdb8adfd372d7c377970843fbc1f657d31bc15c555"
	I1025 10:22:02.672420  647497 cri.go:89] found id: "2f0454c1c473b531c3c2ce0e0e81352e26d1c0cd6888ff3fe87bd24e68ae0248"
	I1025 10:22:02.672426  647497 cri.go:89] found id: "ca8e9fdba848b911be60a6b3b46d5c7a4141cbb69f8d11609a1d58392aeee7c1"
	I1025 10:22:02.672429  647497 cri.go:89] found id: "040afacf3651f3df296c0fb9e05451bd6f2a7e10325871a10ea903d99da7a876"
	I1025 10:22:02.672432  647497 cri.go:89] found id: "5651b5355eb316ad91569abe8d79084a109bfb7f5e3317226217acc032d02de1"
	I1025 10:22:02.672437  647497 cri.go:89] found id: "4a3076ac0e1e7cab1ae1e3436bd70e3c3b3965b186f842a7e0c0d524505d0c57"
	I1025 10:22:02.672439  647497 cri.go:89] found id: "19816f19d39c5773a667353841a1802f9e8d4a9493ed76177e3cffba9eb45dd7"
	I1025 10:22:02.672442  647497 cri.go:89] found id: "93e7c0501a9a92272de292874e804fe8724d5cd8097e77aa3924e634b8f8d63b"
	I1025 10:22:02.672449  647497 cri.go:89] found id: "1c249100b1cdb4e0f46f4f1eee7d35d1ec8fc6f35a9262f42b142aeb9b478f15"
	I1025 10:22:02.672452  647497 cri.go:89] found id: "fb5a07f67d104ece5c4e59cf02a6acaa20151d01116039e6818d51c497d4e740"
	I1025 10:22:02.672454  647497 cri.go:89] found id: ""
	I1025 10:22:02.672506  647497 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:22:02.688651  647497 out.go:203] 
	W1025 10:22:02.690018  647497 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:22:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:22:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 10:22:02.690039  647497 out.go:285] * 
	* 
	W1025 10:22:02.694287  647497 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 10:22:02.696124  647497 out.go:203] 

                                                
                                                
** /stderr **
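The pause fails because every attempt to enumerate containers with `sudo runc list -f json` hits `open /run/runc: no such file or directory`: /run/runc is runc's default state root, and on this CRI-O node it evidently is not where the runtime keeps its state, while crictl lists the same containers fine earlier in the trace. A few diagnostic commands one might run on the node; the alternate paths are assumptions, since CRI-O can be configured with runc or crun and a non-default state root:

	# The CRI-level view works, as the trace above shows; start there:
	sudo crictl ps -a --quiet

	# Check which runtime state roots actually exist before pointing runc at one:
	sudo ls -d /run/runc /run/crun 2>/dev/null

	# runc accepts an explicit state root via its global --root flag:
	sudo runc --root "$STATE_DIR" list
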
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-767846 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-767846
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-767846:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a861cbbe8f62d16ea86405f6b301634a956e7a957a4ea978424585deabae4058",
	        "Created": "2025-10-25T10:19:56.495133916Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 636801,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:21:04.318503167Z",
	            "FinishedAt": "2025-10-25T10:21:03.24730017Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/a861cbbe8f62d16ea86405f6b301634a956e7a957a4ea978424585deabae4058/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a861cbbe8f62d16ea86405f6b301634a956e7a957a4ea978424585deabae4058/hostname",
	        "HostsPath": "/var/lib/docker/containers/a861cbbe8f62d16ea86405f6b301634a956e7a957a4ea978424585deabae4058/hosts",
	        "LogPath": "/var/lib/docker/containers/a861cbbe8f62d16ea86405f6b301634a956e7a957a4ea978424585deabae4058/a861cbbe8f62d16ea86405f6b301634a956e7a957a4ea978424585deabae4058-json.log",
	        "Name": "/default-k8s-diff-port-767846",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-767846:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-767846",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a861cbbe8f62d16ea86405f6b301634a956e7a957a4ea978424585deabae4058",
	                "LowerDir": "/var/lib/docker/overlay2/ddb4157cd5afee722521019e7523ab5e85d231f87d65a983b26a341edfbd1bbc-init/diff:/var/lib/docker/overlay2/9d1960ec6ab151df7efbd4d43f9ccd1aaf4e0bd9e7db4285118644a6d5a54279/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ddb4157cd5afee722521019e7523ab5e85d231f87d65a983b26a341edfbd1bbc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ddb4157cd5afee722521019e7523ab5e85d231f87d65a983b26a341edfbd1bbc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ddb4157cd5afee722521019e7523ab5e85d231f87d65a983b26a341edfbd1bbc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-767846",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-767846/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-767846",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-767846",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-767846",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0e205027ecee405decb8328b526c337d6ec42c4c95dbb4a7547276c93105f899",
	            "SandboxKey": "/var/run/docker/netns/0e205027ecee",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-767846": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:62:a1:05:09:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "49994b8d670ad539016da4784c6cdaa9b9b52e8e74fc4aee0b1293b182f436c0",
	                    "EndpointID": "b9bae7737696beddf7f7522975c63359192721ef0c70428ee84d6b262898ffa6",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-767846",
	                        "a861cbbe8f62"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
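When only a couple of fields from an inspect dump like the one above matter, Go templates avoid scrolling the whole document. Both one-liners below are grounded in this trace: the first is exactly the template the pause command ran at 10:22:01, and the second pulls the container's address on the per-profile network (per the dump above, they print 33123 and 192.168.103.2 respectively):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-767846
	docker inspect -f '{{(index .NetworkSettings.Networks "default-k8s-diff-port-767846").IPAddress}}' default-k8s-diff-port-767846
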
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-767846 -n default-k8s-diff-port-767846
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-767846 -n default-k8s-diff-port-767846: exit status 2 (358.403442ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-767846 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-767846 logs -n 25: (1.229418881s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ stop    │ -p newest-cni-667966 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-767846 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-667966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p newest-cni-667966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ stop    │ -p default-k8s-diff-port-767846 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:21 UTC │
	│ addons  │ enable dashboard -p no-preload-899665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p no-preload-899665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:21 UTC │
	│ image   │ newest-cni-667966 image list --format=json                                                                                                                                                                                                    │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ pause   │ -p newest-cni-667966 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-767846 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ start   │ -p default-k8s-diff-port-767846 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p newest-cni-667966                                                                                                                                                                                                                          │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p newest-cni-667966                                                                                                                                                                                                                          │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p disable-driver-mounts-805899                                                                                                                                                                                                               │ disable-driver-mounts-805899 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ start   │ -p embed-certs-683681 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-683681           │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ image   │ old-k8s-version-714798 image list --format=json                                                                                                                                                                                               │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ pause   │ -p old-k8s-version-714798 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	│ delete  │ -p old-k8s-version-714798                                                                                                                                                                                                                     │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p old-k8s-version-714798                                                                                                                                                                                                                     │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ image   │ no-preload-899665 image list --format=json                                                                                                                                                                                                    │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ pause   │ -p no-preload-899665 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	│ delete  │ -p no-preload-899665                                                                                                                                                                                                                          │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:22 UTC │
	│ image   │ default-k8s-diff-port-767846 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │ 25 Oct 25 10:22 UTC │
	│ pause   │ -p default-k8s-diff-port-767846 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │                     │
	│ delete  │ -p no-preload-899665                                                                                                                                                                                                                          │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │ 25 Oct 25 10:22 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:21:10
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:21:10.148251  638584 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:21:10.148605  638584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:21:10.148630  638584 out.go:374] Setting ErrFile to fd 2...
	I1025 10:21:10.148638  638584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:21:10.148938  638584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 10:21:10.149711  638584 out.go:368] Setting JSON to false
	I1025 10:21:10.151634  638584 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7419,"bootTime":1761380251,"procs":447,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 10:21:10.151786  638584 start.go:141] virtualization: kvm guest
	I1025 10:21:10.154262  638584 out.go:179] * [embed-certs-683681] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 10:21:10.155881  638584 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:21:10.155931  638584 notify.go:220] Checking for updates...
	I1025 10:21:10.158857  638584 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:21:10.160458  638584 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:21:10.161966  638584 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	I1025 10:21:10.163444  638584 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 10:21:10.165074  638584 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:21:10.167201  638584 config.go:182] Loaded profile config "default-k8s-diff-port-767846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:10.167413  638584 config.go:182] Loaded profile config "no-preload-899665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:10.167543  638584 config.go:182] Loaded profile config "old-k8s-version-714798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:21:10.167677  638584 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:21:10.195271  638584 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 10:21:10.195411  638584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:21:10.276912  638584 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-25 10:21:10.253206883 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:21:10.277024  638584 docker.go:318] overlay module found
	I1025 10:21:10.278915  638584 out.go:179] * Using the docker driver based on user configuration
	I1025 10:21:10.280189  638584 start.go:305] selected driver: docker
	I1025 10:21:10.280210  638584 start.go:925] validating driver "docker" against <nil>
	I1025 10:21:10.280228  638584 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:21:10.280870  638584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:21:10.351945  638584 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-25 10:21:10.340512633 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:21:10.352169  638584 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 10:21:10.352450  638584 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:21:10.354600  638584 out.go:179] * Using Docker driver with root privileges
	I1025 10:21:10.356067  638584 cni.go:84] Creating CNI manager for ""
	I1025 10:21:10.356119  638584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:21:10.356128  638584 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 10:21:10.356206  638584 start.go:349] cluster config:
	{Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:21:10.359204  638584 out.go:179] * Starting "embed-certs-683681" primary control-plane node in "embed-certs-683681" cluster
	I1025 10:21:10.360475  638584 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:21:10.361884  638584 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:21:10.363223  638584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:10.363261  638584 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:21:10.363282  638584 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 10:21:10.363300  638584 cache.go:58] Caching tarball of preloaded images
	I1025 10:21:10.363426  638584 preload.go:233] Found /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 10:21:10.363440  638584 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:21:10.363573  638584 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/config.json ...
	I1025 10:21:10.363603  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/config.json: {Name:mk7d7cb38e92abe91e5617ae8c0cde69820d256b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:10.401470  638584 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:21:10.401501  638584 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:21:10.401524  638584 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:21:10.401557  638584 start.go:360] acquireMachinesLock for embed-certs-683681: {Name:mkb49d854e007783568583b216321c2ada753d14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:21:10.401681  638584 start.go:364] duration metric: took 100.361µs to acquireMachinesLock for "embed-certs-683681"
	I1025 10:21:10.401719  638584 start.go:93] Provisioning new machine with config: &{Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:21:10.401811  638584 start.go:125] createHost starting for "" (driver="docker")
	I1025 10:21:09.341512  636484 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:21:09.341546  636484 machine.go:96] duration metric: took 4.679953004s to provisionDockerMachine
	I1025 10:21:09.341561  636484 start.go:293] postStartSetup for "default-k8s-diff-port-767846" (driver="docker")
	I1025 10:21:09.341576  636484 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:21:09.341718  636484 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:21:09.341793  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.365110  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:09.484377  636484 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:21:09.489414  636484 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:21:09.489442  636484 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:21:09.489453  636484 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/addons for local assets ...
	I1025 10:21:09.489516  636484 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/files for local assets ...
	I1025 10:21:09.489612  636484 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem -> 3254552.pem in /etc/ssl/certs
	I1025 10:21:09.489735  636484 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:21:09.499262  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:09.521134  636484 start.go:296] duration metric: took 179.55364ms for postStartSetup
	I1025 10:21:09.521229  636484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:21:09.521289  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.546865  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:09.651523  636484 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:21:09.656840  636484 fix.go:56] duration metric: took 5.400890226s for fixHost
	I1025 10:21:09.656881  636484 start.go:83] releasing machines lock for "default-k8s-diff-port-767846", held for 5.400960044s
	I1025 10:21:09.656963  636484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-767846
	I1025 10:21:09.678291  636484 ssh_runner.go:195] Run: cat /version.json
	I1025 10:21:09.678335  636484 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:21:09.678385  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.678417  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.699727  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:09.699888  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:09.801273  636484 ssh_runner.go:195] Run: systemctl --version
	I1025 10:21:09.869861  636484 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:21:09.912691  636484 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:21:09.918693  636484 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:21:09.918789  636484 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:21:09.929691  636484 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:21:09.929723  636484 start.go:495] detecting cgroup driver to use...
	I1025 10:21:09.929768  636484 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 10:21:09.929846  636484 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:21:09.947292  636484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:21:09.962309  636484 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:21:09.962380  636484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:21:09.981742  636484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:21:09.997805  636484 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:21:10.091545  636484 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:21:10.191661  636484 docker.go:234] disabling docker service ...
	I1025 10:21:10.191739  636484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:21:10.211470  636484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:21:10.232902  636484 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:21:10.343594  636484 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:21:10.458272  636484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:21:10.475115  636484 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:21:10.492690  636484 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:21:10.492760  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.505848  636484 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 10:21:10.505908  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.517567  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.531478  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.545455  636484 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:21:10.557702  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.571143  636484 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.582240  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.593233  636484 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:21:10.602910  636484 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:21:10.612119  636484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:10.705561  636484 ssh_runner.go:195] Run: sudo systemctl restart crio
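	The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to systemd, re-adds conmon_cgroup, and opens unprivileged low ports via default_sysctls. A minimal sketch of the fragment those edits converge on, written to a scratch path so it stays non-destructive (the .example suffix and heredoc are illustrative; keys and values are taken from the commands above):
	
	sudo tee /etc/crio/crio.conf.d/02-crio.conf.example <<'EOF'
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
	
	cri-o only picks up values like these on restart, which is why the log reloads systemd and restarts crio just above.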
	I1025 10:21:10.849205  636484 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:21:10.849299  636484 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:21:10.853987  636484 start.go:563] Will wait 60s for crictl version
	I1025 10:21:10.854061  636484 ssh_runner.go:195] Run: which crictl
	I1025 10:21:10.858281  636484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:21:10.891437  636484 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:21:10.891545  636484 ssh_runner.go:195] Run: crio --version
	I1025 10:21:10.928397  636484 ssh_runner.go:195] Run: crio --version
	I1025 10:21:10.968448  636484 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:21:10.969831  636484 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-767846 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:21:10.988308  636484 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1025 10:21:10.993548  636484 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:21:11.007467  636484 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-767846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-767846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:21:11.007638  636484 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:11.007713  636484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:11.050081  636484 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:11.050104  636484 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:21:11.050159  636484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:11.079408  636484 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:11.079432  636484 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:21:11.079440  636484 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1025 10:21:11.079542  636484 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-767846 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-767846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:21:11.079604  636484 ssh_runner.go:195] Run: crio config
	I1025 10:21:11.135081  636484 cni.go:84] Creating CNI manager for ""
	I1025 10:21:11.135104  636484 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:21:11.135125  636484 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:21:11.135152  636484 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-767846 NodeName:default-k8s-diff-port-767846 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:21:11.135274  636484 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-767846"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:21:11.135376  636484 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:21:11.146044  636484 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:21:11.146127  636484 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:21:11.157527  636484 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1025 10:21:11.173105  636484 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:21:11.194054  636484 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
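	The rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new before being compared with the copy already on the node. A hedged sketch of sanity-checking a rendered file like this by hand (the kubeadm config validate subcommand is available in recent kubeadm releases; the binary path comes from the log above):
	
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new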
	I1025 10:21:11.210598  636484 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:21:11.215039  636484 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:21:11.228199  636484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:11.315547  636484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:21:11.344889  636484 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846 for IP: 192.168.103.2
	I1025 10:21:11.344914  636484 certs.go:195] generating shared ca certs ...
	I1025 10:21:11.344936  636484 certs.go:227] acquiring lock for ca certs: {Name:mke559c04eea9c265cb2a7beab0da125bee52db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:11.345096  636484 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key
	I1025 10:21:11.345147  636484 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key
	I1025 10:21:11.345159  636484 certs.go:257] generating profile certs ...
	I1025 10:21:11.345283  636484 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/client.key
	I1025 10:21:11.345382  636484 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/apiserver.key.0fbb729d
	I1025 10:21:11.345433  636484 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/proxy-client.key
	I1025 10:21:11.345576  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem (1338 bytes)
	W1025 10:21:11.345621  636484 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455_empty.pem, impossibly tiny 0 bytes
	I1025 10:21:11.345634  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:21:11.345661  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:21:11.345688  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:21:11.345716  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem (1679 bytes)
	I1025 10:21:11.345768  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:11.346665  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:21:11.371779  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 10:21:11.395674  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:21:11.420943  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:21:11.450225  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 10:21:11.471921  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:21:11.491964  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:21:11.513657  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:21:11.539802  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem --> /usr/share/ca-certificates/325455.pem (1338 bytes)
	I1025 10:21:11.564482  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /usr/share/ca-certificates/3254552.pem (1708 bytes)
	I1025 10:21:11.585472  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:21:11.605762  636484 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:21:11.620550  636484 ssh_runner.go:195] Run: openssl version
	I1025 10:21:11.628742  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/325455.pem && ln -fs /usr/share/ca-certificates/325455.pem /etc/ssl/certs/325455.pem"
	I1025 10:21:11.640494  636484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/325455.pem
	I1025 10:21:11.645456  636484 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:38 /usr/share/ca-certificates/325455.pem
	I1025 10:21:11.645535  636484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/325455.pem
	I1025 10:21:11.681821  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/325455.pem /etc/ssl/certs/51391683.0"
	I1025 10:21:11.692404  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3254552.pem && ln -fs /usr/share/ca-certificates/3254552.pem /etc/ssl/certs/3254552.pem"
	I1025 10:21:11.702722  636484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3254552.pem
	I1025 10:21:11.707367  636484 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:38 /usr/share/ca-certificates/3254552.pem
	I1025 10:21:11.707434  636484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3254552.pem
	I1025 10:21:11.744550  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3254552.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:21:11.754748  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:21:11.765670  636484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:11.770501  636484 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:32 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:11.770568  636484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:11.806437  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
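	Each ln -fs above creates the hash-named lookup link that OpenSSL expects in /etc/ssl/certs: the link name is the subject-name hash printed by openssl x509 -hash, plus a .0 suffix. A sketch of deriving the same link by hand (for minikubeCA.pem the log shows the hash b5213941, hence b5213941.0):
	
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"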
	I1025 10:21:11.816622  636484 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:21:11.821750  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:21:11.869084  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:21:11.918865  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:21:11.967891  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:21:12.023868  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:21:12.087958  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
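	The -checkend 86400 probes above ask openssl whether each cluster certificate will still be valid 24 hours (86400 seconds) from now; a non-zero exit means the certificate expires within that window. The same check by hand:
	
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" \
	  || echo "expires within 24h"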
	I1025 10:21:12.133903  636484 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-767846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-767846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:21:12.133995  636484 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:21:12.134057  636484 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:21:12.176249  636484 cri.go:89] found id: "5651b5355eb316ad91569abe8d79084a109bfb7f5e3317226217acc032d02de1"
	I1025 10:21:12.176277  636484 cri.go:89] found id: "4a3076ac0e1e7cab1ae1e3436bd70e3c3b3965b186f842a7e0c0d524505d0c57"
	I1025 10:21:12.176284  636484 cri.go:89] found id: "19816f19d39c5773a667353841a1802f9e8d4a9493ed76177e3cffba9eb45dd7"
	I1025 10:21:12.176289  636484 cri.go:89] found id: "93e7c0501a9a92272de292874e804fe8724d5cd8097e77aa3924e634b8f8d63b"
	I1025 10:21:12.176294  636484 cri.go:89] found id: ""
	I1025 10:21:12.176379  636484 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:21:12.191582  636484 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:21:12Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:21:12.191656  636484 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:21:12.201840  636484 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:21:12.201870  636484 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:21:12.201918  636484 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:21:12.211065  636484 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:21:12.211910  636484 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-767846" does not appear in /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:21:12.212424  636484 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-321838/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-767846" cluster setting kubeconfig missing "default-k8s-diff-port-767846" context setting]
	I1025 10:21:12.212991  636484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:12.214595  636484 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:21:12.225309  636484 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1025 10:21:12.225361  636484 kubeadm.go:601] duration metric: took 23.484211ms to restartPrimaryControlPlane
	I1025 10:21:12.225372  636484 kubeadm.go:402] duration metric: took 91.480993ms to StartCluster
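	restartPrimaryControlPlane decides whether the control plane needs a rebuild by diffing the kubeadm config already on disk against the freshly rendered one; identical files (diff exits 0) mean no reconfiguration, which is the fast path taken here. The same check by hand:
	
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	  && echo "no reconfiguration required"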
	I1025 10:21:12.225394  636484 settings.go:142] acquiring lock: {Name:mkccf1ae03faaf532633e370e89b56d0fc3d2b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:12.225489  636484 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:21:12.226739  636484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:12.227039  636484 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:21:12.227167  636484 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:21:12.227262  636484 config.go:182] Loaded profile config "default-k8s-diff-port-767846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:12.227271  636484 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-767846"
	I1025 10:21:12.227291  636484 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-767846"
	W1025 10:21:12.227299  636484 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:21:12.227297  636484 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-767846"
	I1025 10:21:12.227332  636484 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-767846"
	I1025 10:21:12.227339  636484 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-767846"
	W1025 10:21:12.227342  636484 addons.go:247] addon dashboard should already be in state true
	I1025 10:21:12.227353  636484 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-767846"
	I1025 10:21:12.227367  636484 host.go:66] Checking if "default-k8s-diff-port-767846" exists ...
	I1025 10:21:12.227371  636484 host.go:66] Checking if "default-k8s-diff-port-767846" exists ...
	I1025 10:21:12.227806  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.227847  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.227905  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.232961  636484 out.go:179] * Verifying Kubernetes components...
	I1025 10:21:12.234572  636484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:12.260042  636484 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:21:12.260116  636484 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:21:12.261263  636484 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-767846"
	W1025 10:21:12.261282  636484 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:21:12.261305  636484 host.go:66] Checking if "default-k8s-diff-port-767846" exists ...
	I1025 10:21:12.261728  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.262059  636484 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:21:12.262078  636484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:21:12.262129  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:12.265414  636484 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1025 10:21:09.268544  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	W1025 10:21:11.766755  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	W1025 10:21:09.831833  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:12.337504  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	I1025 10:21:12.266825  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:21:12.266852  636484 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:21:12.266926  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:12.302238  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:12.306595  636484 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:21:12.306701  636484 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:21:12.306633  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:12.307467  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:12.337295  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:12.414307  636484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:21:12.436001  636484 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-767846" to be "Ready" ...
	I1025 10:21:12.436611  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:21:12.436644  636484 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:21:12.451080  636484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:21:12.456814  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:21:12.456844  636484 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:21:12.465383  636484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:21:12.479456  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:21:12.479485  636484 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:21:12.501005  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:21:12.501032  636484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:21:12.526625  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:21:12.526672  636484 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:21:12.553034  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:21:12.553076  636484 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:21:12.573193  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:21:12.573227  636484 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:21:12.590613  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:21:12.590687  636484 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:21:12.606035  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:21:12.606071  636484 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:21:12.624851  636484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:21:13.931289  636484 node_ready.go:49] node "default-k8s-diff-port-767846" is "Ready"
	I1025 10:21:13.931333  636484 node_ready.go:38] duration metric: took 1.495294194s for node "default-k8s-diff-port-767846" to be "Ready" ...
	I1025 10:21:13.931355  636484 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:21:13.931415  636484 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:21:10.403779  638584 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 10:21:10.404001  638584 start.go:159] libmachine.API.Create for "embed-certs-683681" (driver="docker")
	I1025 10:21:10.404030  638584 client.go:168] LocalClient.Create starting
	I1025 10:21:10.404114  638584 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem
	I1025 10:21:10.404167  638584 main.go:141] libmachine: Decoding PEM data...
	I1025 10:21:10.404189  638584 main.go:141] libmachine: Parsing certificate...
	I1025 10:21:10.404267  638584 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem
	I1025 10:21:10.404309  638584 main.go:141] libmachine: Decoding PEM data...
	I1025 10:21:10.404335  638584 main.go:141] libmachine: Parsing certificate...
	I1025 10:21:10.404773  638584 cli_runner.go:164] Run: docker network inspect embed-certs-683681 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 10:21:10.426055  638584 cli_runner.go:211] docker network inspect embed-certs-683681 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 10:21:10.426150  638584 network_create.go:284] running [docker network inspect embed-certs-683681] to gather additional debugging logs...
	I1025 10:21:10.426175  638584 cli_runner.go:164] Run: docker network inspect embed-certs-683681
	W1025 10:21:10.450027  638584 cli_runner.go:211] docker network inspect embed-certs-683681 returned with exit code 1
	I1025 10:21:10.450066  638584 network_create.go:287] error running [docker network inspect embed-certs-683681]: docker network inspect embed-certs-683681: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-683681 not found
	I1025 10:21:10.450079  638584 network_create.go:289] output of [docker network inspect embed-certs-683681]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-683681 not found
	
	** /stderr **
	I1025 10:21:10.450215  638584 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:21:10.472971  638584 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b7c770f4d6bb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:31:17:4a:ca:3a} reservation:<nil>}
	I1025 10:21:10.473601  638584 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5189eca196b1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:42:d7:a0:fe:65} reservation:<nil>}
	I1025 10:21:10.474232  638584 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a58b5f36975c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1e:4d:ae:71:f0:49} reservation:<nil>}
	I1025 10:21:10.474754  638584 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c8aca1f62a35 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ce:65:a5:98:3f:04} reservation:<nil>}
	I1025 10:21:10.475283  638584 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-cc93092e09ae IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:c2:73:0a:fa:f6:13} reservation:<nil>}
	I1025 10:21:10.475999  638584 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a03c50}
	I1025 10:21:10.476026  638584 network_create.go:124] attempt to create docker network embed-certs-683681 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1025 10:21:10.476083  638584 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-683681 embed-certs-683681
	I1025 10:21:10.551427  638584 network_create.go:108] docker network embed-certs-683681 192.168.94.0/24 created
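	The scan above walks candidate private /24 subnets in steps of 9 in the third octet (49, 58, 67, 76, 85, ...) and takes the first one that no existing docker bridge network already claims. A rough shell equivalent of that probe (the format string and exact-match grep are illustrative, not minikube's implementation):
	
	for o in $(seq 49 9 247); do
	  subnet="192.168.${o}.0/24"
	  docker network inspect --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' \
	    $(docker network ls -q) 2>/dev/null | grep -qx "$subnet" && continue
	  echo "using free private subnet $subnet"; break
	done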
	I1025 10:21:10.551459  638584 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-683681" container
	I1025 10:21:10.551518  638584 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 10:21:10.575731  638584 cli_runner.go:164] Run: docker volume create embed-certs-683681 --label name.minikube.sigs.k8s.io=embed-certs-683681 --label created_by.minikube.sigs.k8s.io=true
	I1025 10:21:10.596450  638584 oci.go:103] Successfully created a docker volume embed-certs-683681
	I1025 10:21:10.596543  638584 cli_runner.go:164] Run: docker run --rm --name embed-certs-683681-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-683681 --entrypoint /usr/bin/test -v embed-certs-683681:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 10:21:11.043993  638584 oci.go:107] Successfully prepared a docker volume embed-certs-683681
	I1025 10:21:11.044039  638584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:11.044062  638584 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 10:21:11.044129  638584 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-683681:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1025 10:21:13.772552  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	I1025 10:21:14.336599  624632 pod_ready.go:94] pod "coredns-5dd5756b68-k5644" is "Ready"
	I1025 10:21:14.336630  624632 pod_ready.go:86] duration metric: took 39.577109588s for pod "coredns-5dd5756b68-k5644" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.340650  624632 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.346235  624632 pod_ready.go:94] pod "etcd-old-k8s-version-714798" is "Ready"
	I1025 10:21:14.346269  624632 pod_ready.go:86] duration metric: took 5.588309ms for pod "etcd-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.349654  624632 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.355198  624632 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-714798" is "Ready"
	I1025 10:21:14.355230  624632 pod_ready.go:86] duration metric: took 5.550064ms for pod "kube-apiserver-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.359203  624632 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.515864  624632 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-714798" is "Ready"
	I1025 10:21:14.515908  624632 pod_ready.go:86] duration metric: took 156.674255ms for pod "kube-controller-manager-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.679941  624632 pod_ready.go:83] waiting for pod "kube-proxy-kqg7q" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.064359  624632 pod_ready.go:94] pod "kube-proxy-kqg7q" is "Ready"
	I1025 10:21:15.064395  624632 pod_ready.go:86] duration metric: took 384.425103ms for pod "kube-proxy-kqg7q" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.264420  624632 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.664469  624632 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-714798" is "Ready"
	I1025 10:21:15.664501  624632 pod_ready.go:86] duration metric: took 400.048856ms for pod "kube-scheduler-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.664517  624632 pod_ready.go:40] duration metric: took 40.910543454s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:15.713277  624632 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1025 10:21:15.739862  624632 out.go:203] 
	W1025 10:21:15.783078  624632 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1025 10:21:15.791059  624632 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1025 10:21:15.796132  624632 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-714798" cluster and "default" namespace by default
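	(The skew warning above is real: client 1.34.1 against cluster 1.28.0 is six minor versions apart, well beyond kubectl's supported +/-1 skew. A small check to reproduce it, assuming kubectl and jq are on PATH:

		# Sketch: print client vs server versions to spot skew like the
		# warning above.
		kubectl version --output=json \
		  | jq -r '"client: \(.clientVersion.gitVersion)  server: \(.serverVersion.gitVersion)"'
	)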
	I1025 10:21:15.245915  636484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.794706474s)
	I1025 10:21:15.246013  636484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.780553475s)
	I1025 10:21:16.201960  636484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.577043142s)
	I1025 10:21:16.202175  636484 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.270743207s)
	I1025 10:21:16.202205  636484 api_server.go:72] duration metric: took 3.975127965s to wait for apiserver process to appear ...
	I1025 10:21:16.202212  636484 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:21:16.202233  636484 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1025 10:21:16.203931  636484 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-767846 addons enable metrics-server
	
	I1025 10:21:16.206179  636484 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1025 10:21:14.831620  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:16.832274  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	I1025 10:21:16.207469  636484 addons.go:514] duration metric: took 3.980316596s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1025 10:21:16.208161  636484 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:21:16.208186  636484 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:21:16.702507  636484 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1025 10:21:16.707281  636484 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1025 10:21:16.708497  636484 api_server.go:141] control plane version: v1.34.1
	I1025 10:21:16.708529  636484 api_server.go:131] duration metric: took 506.309184ms to wait for apiserver health ...
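	(The healthz exchange above, a 500 with only [-]poststarthook/rbac/bootstrap-roles failing followed by a 200 half a second later, is normal right after apiserver start; the RBAC bootstrap hook finishes last. minikube runs this poll in Go; a shell sketch of the same loop, with host/port copied from this run's log. Note -k skips cert verification, which the real client does not do, and whether /healthz is reachable anonymously depends on the apiserver's auth flags:

		# Sketch: poll /healthz until the apiserver reports ok, mirroring
		# the retry above. -f makes curl fail on the 500 so the loop keeps
		# going until the body is exactly "ok".
		until curl -ksf https://192.168.103.2:8444/healthz | grep -qx ok; do
		  sleep 0.5
		done
	)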
	I1025 10:21:16.708542  636484 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:21:16.712747  636484 system_pods.go:59] 8 kube-system pods found
	I1025 10:21:16.712806  636484 system_pods.go:61] "coredns-66bc5c9577-rznxv" [d7eae20c-8d39-4486-ab11-13675911180f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:16.712819  636484 system_pods.go:61] "etcd-default-k8s-diff-port-767846" [7612c238-ca0d-458d-a901-e96167590fc2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:21:16.712835  636484 system_pods.go:61] "kindnet-vcqs2" [e41fd0fd-97c4-44ef-a645-cf0136340098] Running
	I1025 10:21:16.712845  636484 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-767846" [6eaa12a1-d4b6-4e96-81ed-18662e83034c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:21:16.712859  636484 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-767846" [7ffb55c8-db2f-4807-ace7-044a0c281f62] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:21:16.712874  636484 system_pods.go:61] "kube-proxy-cvm5c" [42278e98-5278-4efa-b484-ec73c16fc851] Running
	I1025 10:21:16.712885  636484 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-767846" [934d0b38-85ad-4c54-9e94-970177aa8cf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:21:16.712924  636484 system_pods.go:61] "storage-provisioner" [06a917da-eaa2-4b50-8c56-31a0ca7d14e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:16.712936  636484 system_pods.go:74] duration metric: took 4.383599ms to wait for pod list to return data ...
	I1025 10:21:16.712948  636484 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:21:16.715673  636484 default_sa.go:45] found service account: "default"
	I1025 10:21:16.715694  636484 default_sa.go:55] duration metric: took 2.737037ms for default service account to be created ...
	I1025 10:21:16.715704  636484 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:21:16.718943  636484 system_pods.go:86] 8 kube-system pods found
	I1025 10:21:16.718978  636484 system_pods.go:89] "coredns-66bc5c9577-rznxv" [d7eae20c-8d39-4486-ab11-13675911180f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:16.718990  636484 system_pods.go:89] "etcd-default-k8s-diff-port-767846" [7612c238-ca0d-458d-a901-e96167590fc2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:21:16.718997  636484 system_pods.go:89] "kindnet-vcqs2" [e41fd0fd-97c4-44ef-a645-cf0136340098] Running
	I1025 10:21:16.719005  636484 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-767846" [6eaa12a1-d4b6-4e96-81ed-18662e83034c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:21:16.719014  636484 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-767846" [7ffb55c8-db2f-4807-ace7-044a0c281f62] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:21:16.719034  636484 system_pods.go:89] "kube-proxy-cvm5c" [42278e98-5278-4efa-b484-ec73c16fc851] Running
	I1025 10:21:16.719042  636484 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-767846" [934d0b38-85ad-4c54-9e94-970177aa8cf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:21:16.719049  636484 system_pods.go:89] "storage-provisioner" [06a917da-eaa2-4b50-8c56-31a0ca7d14e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:16.719059  636484 system_pods.go:126] duration metric: took 3.347724ms to wait for k8s-apps to be running ...
	I1025 10:21:16.719070  636484 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:21:16.719120  636484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:21:16.733907  636484 system_svc.go:56] duration metric: took 14.825705ms WaitForService to wait for kubelet
	I1025 10:21:16.733943  636484 kubeadm.go:586] duration metric: took 4.506864504s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:21:16.733968  636484 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:21:16.737241  636484 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 10:21:16.737269  636484 node_conditions.go:123] node cpu capacity is 8
	I1025 10:21:16.737284  636484 node_conditions.go:105] duration metric: took 3.310515ms to run NodePressure ...
	I1025 10:21:16.737296  636484 start.go:241] waiting for startup goroutines ...
	I1025 10:21:16.737306  636484 start.go:246] waiting for cluster config update ...
	I1025 10:21:16.737329  636484 start.go:255] writing updated cluster config ...
	I1025 10:21:16.737611  636484 ssh_runner.go:195] Run: rm -f paused
	I1025 10:21:16.742069  636484 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:16.748801  636484 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rznxv" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:21:18.754620  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:16.111649  638584 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-683681:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.067461823s)
	I1025 10:21:16.111690  638584 kic.go:203] duration metric: took 5.067622848s to extract preloaded images to volume ...
	W1025 10:21:16.111819  638584 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 10:21:16.111866  638584 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 10:21:16.111917  638584 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 10:21:16.213690  638584 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-683681 --name embed-certs-683681 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-683681 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-683681 --network embed-certs-683681 --ip 192.168.94.2 --volume embed-certs-683681:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
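	(Stripped of labels and the extra port publishes, the container launch above reduces to a privileged, systemd-capable "node" with the preloaded volume mounted at /var. A minimal sketch; container and volume names are placeholders, the kicbase tag is the one from this run:

		# Sketch: minimal kic-style node container.
		docker run -d -t --privileged \
		  --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
		  --tmpfs /tmp --tmpfs /run \
		  -v /lib/modules:/lib/modules:ro \
		  --volume my-cluster:/var \
		  --publish 127.0.0.1::22 --publish 127.0.0.1::8443 \
		  --hostname my-cluster --name my-cluster \
		  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	)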
	I1025 10:21:16.572477  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Running}}
	I1025 10:21:16.594243  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:16.615558  638584 cli_runner.go:164] Run: docker exec embed-certs-683681 stat /var/lib/dpkg/alternatives/iptables
	I1025 10:21:16.666536  638584 oci.go:144] the created container "embed-certs-683681" has a running status.
	I1025 10:21:16.666576  638584 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa...
	I1025 10:21:16.809984  638584 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 10:21:16.847757  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:16.871585  638584 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 10:21:16.871610  638584 kic_runner.go:114] Args: [docker exec --privileged embed-certs-683681 chown docker:docker /home/docker/.ssh/authorized_keys]
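	(The three steps above — generate a key under .minikube/machines, copy the public half into the container, chown it to the docker user — are the whole SSH bootstrap for the node. The equivalent by hand, with placeholder key path and container name:

		# Sketch: inject an SSH public key into a running node container.
		ssh-keygen -t rsa -N "" -f ./id_rsa
		docker exec my-cluster mkdir -p /home/docker/.ssh
		docker cp ./id_rsa.pub my-cluster:/home/docker/.ssh/authorized_keys
		docker exec --privileged my-cluster chown -R docker:docker /home/docker/.ssh
	)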
	I1025 10:21:16.923128  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:16.943365  638584 machine.go:93] provisionDockerMachine start ...
	I1025 10:21:16.943479  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:16.966341  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:16.966647  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:16.966668  638584 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:21:16.967537  638584 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56448->127.0.0.1:33128: read: connection reset by peer
	I1025 10:21:20.116967  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-683681
	
	I1025 10:21:20.117014  638584 ubuntu.go:182] provisioning hostname "embed-certs-683681"
	I1025 10:21:20.117084  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:20.137778  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:20.138008  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:20.138021  638584 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-683681 && echo "embed-certs-683681" | sudo tee /etc/hostname
	W1025 10:21:19.333601  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:21.831601  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:20.755645  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:22.755896  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:20.296939  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-683681
	
	I1025 10:21:20.297025  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:20.319104  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:20.319456  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:20.319479  638584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-683681' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-683681/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-683681' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:21:20.480669  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:21:20.480704  638584 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-321838/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-321838/.minikube}
	I1025 10:21:20.480727  638584 ubuntu.go:190] setting up certificates
	I1025 10:21:20.480741  638584 provision.go:84] configureAuth start
	I1025 10:21:20.480822  638584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-683681
	I1025 10:21:20.505092  638584 provision.go:143] copyHostCerts
	I1025 10:21:20.505168  638584 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem, removing ...
	I1025 10:21:20.505184  638584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem
	I1025 10:21:20.505274  638584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem (1679 bytes)
	I1025 10:21:20.505416  638584 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem, removing ...
	I1025 10:21:20.505430  638584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem
	I1025 10:21:20.505476  638584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem (1078 bytes)
	I1025 10:21:20.505561  638584 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem, removing ...
	I1025 10:21:20.505572  638584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem
	I1025 10:21:20.505630  638584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem (1123 bytes)
	I1025 10:21:20.505706  638584 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem org=jenkins.embed-certs-683681 san=[127.0.0.1 192.168.94.2 embed-certs-683681 localhost minikube]
	I1025 10:21:20.998585  638584 provision.go:177] copyRemoteCerts
	I1025 10:21:20.998661  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:21:20.998717  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.022129  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.137465  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 10:21:21.166388  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:21:21.193168  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:21:21.218286  638584 provision.go:87] duration metric: took 737.524136ms to configureAuth
	I1025 10:21:21.218330  638584 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:21:21.218553  638584 config.go:182] Loaded profile config "embed-certs-683681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:21.218676  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.245915  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:21.246236  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:21.246262  638584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:21:21.569413  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:21:21.569443  638584 machine.go:96] duration metric: took 4.626049853s to provisionDockerMachine
	I1025 10:21:21.569456  638584 client.go:171] duration metric: took 11.165417694s to LocalClient.Create
	I1025 10:21:21.569475  638584 start.go:167] duration metric: took 11.165474816s to libmachine.API.Create "embed-certs-683681"
	I1025 10:21:21.569486  638584 start.go:293] postStartSetup for "embed-certs-683681" (driver="docker")
	I1025 10:21:21.569498  638584 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:21:21.569575  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:21:21.569622  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.594722  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.713328  638584 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:21:21.718538  638584 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:21:21.718572  638584 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:21:21.718589  638584 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/addons for local assets ...
	I1025 10:21:21.718659  638584 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/files for local assets ...
	I1025 10:21:21.718787  638584 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem -> 3254552.pem in /etc/ssl/certs
	I1025 10:21:21.718927  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:21:21.729097  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:21.759300  638584 start.go:296] duration metric: took 189.796063ms for postStartSetup
	I1025 10:21:21.759764  638584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-683681
	I1025 10:21:21.783751  638584 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/config.json ...
	I1025 10:21:21.784070  638584 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:21:21.784113  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.807921  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.920186  638584 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:21:21.927662  638584 start.go:128] duration metric: took 11.525830646s to createHost
	I1025 10:21:21.927699  638584 start.go:83] releasing machines lock for "embed-certs-683681", held for 11.526002458s
	I1025 10:21:21.927785  638584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-683681
	I1025 10:21:21.954049  638584 ssh_runner.go:195] Run: cat /version.json
	I1025 10:21:21.954096  638584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:21:21.954115  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.954188  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.978409  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.979872  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:22.092988  638584 ssh_runner.go:195] Run: systemctl --version
	I1025 10:21:22.175966  638584 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:21:22.229838  638584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:21:22.236975  638584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:21:22.237063  638584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:21:22.280942  638584 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 10:21:22.280974  638584 start.go:495] detecting cgroup driver to use...
	I1025 10:21:22.281010  638584 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 10:21:22.281075  638584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:21:22.306839  638584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:21:22.324489  638584 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:21:22.324560  638584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:21:22.350902  638584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:21:22.380086  638584 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:21:22.506896  638584 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:21:22.639498  638584 docker.go:234] disabling docker service ...
	I1025 10:21:22.639578  638584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:21:22.669198  638584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:21:22.689583  638584 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:21:22.814437  638584 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:21:22.917355  638584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:21:22.933471  638584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:21:22.951220  638584 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:21:22.951289  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.964021  638584 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 10:21:22.964092  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.974888  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.985640  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.996280  638584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:21:23.008692  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:23.019742  638584 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:23.036857  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:23.048489  638584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:21:23.060801  638584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:21:23.072496  638584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:23.170641  638584 ssh_runner.go:195] Run: sudo systemctl restart crio
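	(The sed edits above all target the same drop-in, /etc/crio/crio.conf.d/02-crio.conf, before crio is restarted. Approximately, the file they converge on — a sketch only; the real drop-in carries more settings:

		# Sketch of the resulting CRI-O drop-in (TOML), with the values
		# the sed edits above set.
		[crio.image]
		pause_image = "registry.k8s.io/pause:3.10.1"

		[crio.runtime]
		cgroup_manager = "systemd"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]
	)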
	I1025 10:21:24.036513  638584 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:21:24.036615  638584 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:21:24.042080  638584 start.go:563] Will wait 60s for crictl version
	I1025 10:21:24.042156  638584 ssh_runner.go:195] Run: which crictl
	I1025 10:21:24.047422  638584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:21:24.082362  638584 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:21:24.082466  638584 ssh_runner.go:195] Run: crio --version
	I1025 10:21:24.126861  638584 ssh_runner.go:195] Run: crio --version
	I1025 10:21:24.175837  638584 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:21:24.178134  638584 cli_runner.go:164] Run: docker network inspect embed-certs-683681 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:21:24.201413  638584 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1025 10:21:24.207278  638584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:21:24.223512  638584 kubeadm.go:883] updating cluster {Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:21:24.223683  638584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:24.223762  638584 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:24.272966  638584 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:24.272993  638584 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:21:24.273051  638584 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:24.308934  638584 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:24.308965  638584 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:21:24.308975  638584 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1025 10:21:24.309097  638584 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-683681 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:21:24.309184  638584 ssh_runner.go:195] Run: crio config
	I1025 10:21:24.382243  638584 cni.go:84] Creating CNI manager for ""
	I1025 10:21:24.382273  638584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:21:24.382297  638584 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:21:24.382337  638584 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-683681 NodeName:embed-certs-683681 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:21:24.382524  638584 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-683681"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
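	(The four documents above — InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration — are what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines down. Such a file can be sanity-checked before init; a sketch, assuming a recent kubeadm on PATH inside the node:

		# Sketch: validate the generated config before `kubeadm init`.
		sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	)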
	
	I1025 10:21:24.382607  638584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:21:24.394268  638584 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:21:24.394387  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:21:24.406618  638584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1025 10:21:24.425969  638584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:21:24.449251  638584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1025 10:21:24.469582  638584 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:21:24.474973  638584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:21:24.490157  638584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:24.584608  638584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:21:24.614181  638584 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681 for IP: 192.168.94.2
	I1025 10:21:24.614210  638584 certs.go:195] generating shared ca certs ...
	I1025 10:21:24.614233  638584 certs.go:227] acquiring lock for ca certs: {Name:mke559c04eea9c265cb2a7beab0da125bee52db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.614424  638584 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key
	I1025 10:21:24.614484  638584 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key
	I1025 10:21:24.614496  638584 certs.go:257] generating profile certs ...
	I1025 10:21:24.614561  638584 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.key
	I1025 10:21:24.614588  638584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.crt with IP's: []
	I1025 10:21:24.860136  638584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.crt ...
	I1025 10:21:24.860185  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.crt: {Name:mk13866e786fa05bf2537b78a891e332bde8c0bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.860411  638584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.key ...
	I1025 10:21:24.860433  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.key: {Name:mk1337a45bd58216e46a47cf6f99440d10fa8b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.860559  638584 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81
	I1025 10:21:24.860582  638584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1025 10:21:24.949254  638584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81 ...
	I1025 10:21:24.949286  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81: {Name:mkc51a7d58b8866a38120d27081d78fd5d68e786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.949518  638584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81 ...
	I1025 10:21:24.949547  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81: {Name:mk94d386c4ce3ce7255b450634f934fa53890845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.949697  638584 certs.go:382] copying /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81 -> /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt
	I1025 10:21:24.949820  638584 certs.go:386] copying /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81 -> /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key
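	(The apiserver profile cert above is issued for four IP SANs: the service VIP 10.96.0.1, loopback, 10.0.0.1, and the node IP 192.168.94.2. minikube does this in Go; an openssl equivalent as a sketch, with the SANs copied from the log and the CA/output file names assumed:

		# Sketch: issue an apiserver cert with the same IP SANs, signed by
		# the profile CA (ca.crt/ca.key and output names are assumptions).
		openssl req -new -newkey rsa:2048 -nodes \
		  -subj "/CN=minikube" \
		  -keyout apiserver.key -out apiserver.csr
		openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key \
		  -CAcreateserial -days 365 -out apiserver.crt \
		  -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.94.2')
	)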
	I1025 10:21:24.949908  638584 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key
	I1025 10:21:24.949937  638584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt with IP's: []
	W1025 10:21:24.331982  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:26.831359  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:25.254917  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:27.754831  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:25.383221  638584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt ...
	I1025 10:21:25.383272  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt: {Name:mk46cb1967cb21d5d9aafce0c0335add4612cf00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:25.383535  638584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key ...
	I1025 10:21:25.383560  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key: {Name:mkda2e4f8c6847061b7c83d0748f50b193d241a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:25.383814  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem (1338 bytes)
	W1025 10:21:25.383870  638584 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455_empty.pem, impossibly tiny 0 bytes
	I1025 10:21:25.383887  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:21:25.383917  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:21:25.383941  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:21:25.383962  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem (1679 bytes)
	I1025 10:21:25.384004  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:25.384676  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:21:25.406810  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 10:21:25.429770  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:21:25.451189  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:21:25.475734  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1025 10:21:25.500538  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 10:21:25.522356  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:21:25.545290  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:21:25.567130  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:21:25.591445  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem --> /usr/share/ca-certificates/325455.pem (1338 bytes)
	I1025 10:21:25.616100  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /usr/share/ca-certificates/3254552.pem (1708 bytes)
	I1025 10:21:25.635723  638584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:21:25.650419  638584 ssh_runner.go:195] Run: openssl version
	I1025 10:21:25.657438  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:21:25.667296  638584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:25.671566  638584 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:32 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:25.671639  638584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:25.708223  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:21:25.718734  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/325455.pem && ln -fs /usr/share/ca-certificates/325455.pem /etc/ssl/certs/325455.pem"
	I1025 10:21:25.728930  638584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/325455.pem
	I1025 10:21:25.733604  638584 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:38 /usr/share/ca-certificates/325455.pem
	I1025 10:21:25.733672  638584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/325455.pem
	I1025 10:21:25.770496  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/325455.pem /etc/ssl/certs/51391683.0"
	I1025 10:21:25.780237  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3254552.pem && ln -fs /usr/share/ca-certificates/3254552.pem /etc/ssl/certs/3254552.pem"
	I1025 10:21:25.790312  638584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3254552.pem
	I1025 10:21:25.794835  638584 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:38 /usr/share/ca-certificates/3254552.pem
	I1025 10:21:25.794898  638584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3254552.pem
	I1025 10:21:25.832583  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3254552.pem /etc/ssl/certs/3ec20f2e.0"
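	(The .0 names above — b5213941.0, 51391683.0, 3ec20f2e.0 — are not arbitrary: OpenSSL resolves trust anchors by subject hash, so each CA PEM under /etc/ssl/certs gets a symlink named <hash>.0. That is exactly what each `openssl x509 -hash` / `ln -fs` pair above computes; by hand:

		# Sketch: derive the subject-hash link name for a CA and install
		# the symlink, as the command pairs above do.
		pem=/usr/share/ca-certificates/minikubeCA.pem
		h=$(openssl x509 -hash -noout -in "$pem")
		sudo ln -fs "$pem" "/etc/ssl/certs/$h.0"
	)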
	I1025 10:21:25.842614  638584 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:21:25.846872  638584 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 10:21:25.846930  638584 kubeadm.go:400] StartCluster: {Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:21:25.847005  638584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:21:25.847068  638584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:21:25.875826  638584 cri.go:89] found id: ""
	I1025 10:21:25.875903  638584 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:21:25.885163  638584 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:21:25.894136  638584 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 10:21:25.894192  638584 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:21:25.903706  638584 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 10:21:25.903732  638584 kubeadm.go:157] found existing configuration files:
	
	I1025 10:21:25.903784  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 10:21:25.913301  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 10:21:25.913384  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:21:25.923343  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 10:21:25.932490  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 10:21:25.932550  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:21:25.941477  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 10:21:25.950962  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 10:21:25.951028  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:21:25.959533  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 10:21:25.968524  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 10:21:25.968595  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
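The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes survives only if it already points at the expected control-plane endpoint; on this first start none exist, so all four removals are no-ops. A condensed shell equivalent, with the endpoint taken from the log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"   # missing or stale -> clear before kubeadm init
    done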
	I1025 10:21:25.977380  638584 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 10:21:26.045566  638584 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 10:21:26.120440  638584 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
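Both [WARNING] lines are non-fatal preflight findings: the kicbase image ships no kernel "configs" module (already covered by the SystemVerification entry in the --ignore-preflight-errors list above), and kubelet is not yet enabled as a systemd unit. On an ordinary host the second would be cleared with the command kubeadm itself suggests:

    sudo systemctl enable kubelet.service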
	W1025 10:21:29.331743  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:31.831906  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:30.254936  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:32.256411  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:36.665150  638584 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 10:21:36.665238  638584 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 10:21:36.665366  638584 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 10:21:36.665424  638584 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 10:21:36.665455  638584 kubeadm.go:318] OS: Linux
	I1025 10:21:36.665528  638584 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 10:21:36.665640  638584 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 10:21:36.665711  638584 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 10:21:36.665755  638584 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 10:21:36.665836  638584 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 10:21:36.665906  638584 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 10:21:36.665989  638584 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 10:21:36.666061  638584 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 10:21:36.666164  638584 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 10:21:36.666287  638584 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 10:21:36.666443  638584 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 10:21:36.666505  638584 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 10:21:36.668101  638584 out.go:252]   - Generating certificates and keys ...
	I1025 10:21:36.668178  638584 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 10:21:36.668239  638584 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 10:21:36.668297  638584 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 10:21:36.668408  638584 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 10:21:36.668487  638584 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 10:21:36.668570  638584 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 10:21:36.668632  638584 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 10:21:36.669282  638584 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-683681 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1025 10:21:36.669368  638584 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 10:21:36.669522  638584 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-683681 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1025 10:21:36.669602  638584 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 10:21:36.669681  638584 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 10:21:36.669732  638584 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 10:21:36.669795  638584 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 10:21:36.669856  638584 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 10:21:36.669922  638584 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 10:21:36.669975  638584 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 10:21:36.670054  638584 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 10:21:36.670110  638584 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 10:21:36.670198  638584 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 10:21:36.670268  638584 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 10:21:36.673336  638584 out.go:252]   - Booting up control plane ...
	I1025 10:21:36.673471  638584 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 10:21:36.673585  638584 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 10:21:36.673666  638584 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 10:21:36.673811  638584 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 10:21:36.673918  638584 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 10:21:36.674052  638584 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 10:21:36.674150  638584 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 10:21:36.674197  638584 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 10:21:36.674448  638584 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 10:21:36.674610  638584 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 10:21:36.674735  638584 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.921842ms
	I1025 10:21:36.674869  638584 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 10:21:36.674985  638584 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1025 10:21:36.675113  638584 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 10:21:36.675225  638584 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 10:21:36.675373  638584 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.848539291s
	I1025 10:21:36.675485  638584 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.099917517s
	I1025 10:21:36.675576  638584 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.501482903s
	I1025 10:21:36.675749  638584 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 10:21:36.675902  638584 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 10:21:36.675992  638584 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 10:21:36.676186  638584 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-683681 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 10:21:36.676270  638584 kubeadm.go:318] [bootstrap-token] Using token: gh3e3n.vi8ppuvnf3ix9l58
	I1025 10:21:36.678455  638584 out.go:252]   - Configuring RBAC rules ...
	I1025 10:21:36.678655  638584 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 10:21:36.678741  638584 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 10:21:36.678915  638584 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 10:21:36.679094  638584 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 10:21:36.679206  638584 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 10:21:36.679286  638584 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 10:21:36.679483  638584 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 10:21:36.679551  638584 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 10:21:36.679620  638584 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 10:21:36.679632  638584 kubeadm.go:318] 
	I1025 10:21:36.679721  638584 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 10:21:36.679732  638584 kubeadm.go:318] 
	I1025 10:21:36.679835  638584 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 10:21:36.679845  638584 kubeadm.go:318] 
	I1025 10:21:36.679882  638584 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 10:21:36.679977  638584 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 10:21:36.680061  638584 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 10:21:36.680070  638584 kubeadm.go:318] 
	I1025 10:21:36.680154  638584 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 10:21:36.680170  638584 kubeadm.go:318] 
	I1025 10:21:36.680221  638584 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 10:21:36.680229  638584 kubeadm.go:318] 
	I1025 10:21:36.680289  638584 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 10:21:36.680387  638584 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 10:21:36.680463  638584 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 10:21:36.680471  638584 kubeadm.go:318] 
	I1025 10:21:36.680563  638584 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 10:21:36.680661  638584 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 10:21:36.680670  638584 kubeadm.go:318] 
	I1025 10:21:36.680776  638584 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token gh3e3n.vi8ppuvnf3ix9l58 \
	I1025 10:21:36.680932  638584 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d220a0fdd8ffa4188e5a3a4b5b37c16072a735a52e5507fcdbcb4d38c461642f \
	I1025 10:21:36.680959  638584 kubeadm.go:318] 	--control-plane 
	I1025 10:21:36.680967  638584 kubeadm.go:318] 
	I1025 10:21:36.681062  638584 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 10:21:36.681073  638584 kubeadm.go:318] 
	I1025 10:21:36.681190  638584 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token gh3e3n.vi8ppuvnf3ix9l58 \
	I1025 10:21:36.681350  638584 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d220a0fdd8ffa4188e5a3a4b5b37c16072a735a52e5507fcdbcb4d38c461642f 
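The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 over the cluster CA's public key. For reference, it can be recomputed on the control plane with the pipeline from the kubeadm documentation, assuming the certificateDir reported in the [certs] step above (/var/lib/minikube/certs):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'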
	I1025 10:21:36.681383  638584 cni.go:84] Creating CNI manager for ""
	I1025 10:21:36.681402  638584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:21:36.685048  638584 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1025 10:21:34.332728  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:36.832195  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:34.756305  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:37.255124  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:36.686372  638584 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 10:21:36.691990  638584 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 10:21:36.692012  638584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 10:21:36.711248  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
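The applied manifest deploys kindnet as a DaemonSet; the node stays NotReady (seen in the retries below) until its pod installs the CNI config on the host. A quick check, assuming the standard kindnet label:

    kubectl -n kube-system get pods -l app=kindnet -o wide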
	I1025 10:21:36.950001  638584 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 10:21:36.950063  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:36.950140  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-683681 minikube.k8s.io/updated_at=2025_10_25T10_21_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689 minikube.k8s.io/name=embed-certs-683681 minikube.k8s.io/primary=true
	I1025 10:21:36.962716  638584 ops.go:34] apiserver oom_adj: -16
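The -16 oom_adj read above means the kernel's OOM killer is biased away from kube-apiserver, so under memory pressure workloads die before the control plane does. The legacy /proc interface used by the log can be inspected directly:

    cat /proc/$(pgrep kube-apiserver)/oom_adj   # -16 = strongly deprioritized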
	I1025 10:21:37.040626  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:37.541457  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:38.041452  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:38.541265  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:39.041583  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:39.541553  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:40.041803  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:39.330926  631515 pod_ready.go:94] pod "coredns-66bc5c9577-gtnvx" is "Ready"
	I1025 10:21:39.330956  631515 pod_ready.go:86] duration metric: took 38.506063732s for pod "coredns-66bc5c9577-gtnvx" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.333923  631515 pod_ready.go:83] waiting for pod "etcd-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.338091  631515 pod_ready.go:94] pod "etcd-no-preload-899665" is "Ready"
	I1025 10:21:39.338119  631515 pod_ready.go:86] duration metric: took 4.169551ms for pod "etcd-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.340510  631515 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.344782  631515 pod_ready.go:94] pod "kube-apiserver-no-preload-899665" is "Ready"
	I1025 10:21:39.344808  631515 pod_ready.go:86] duration metric: took 4.267435ms for pod "kube-apiserver-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.346928  631515 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.527867  631515 pod_ready.go:94] pod "kube-controller-manager-no-preload-899665" is "Ready"
	I1025 10:21:39.527898  631515 pod_ready.go:86] duration metric: took 180.948376ms for pod "kube-controller-manager-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.728099  631515 pod_ready.go:83] waiting for pod "kube-proxy-fdthr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:40.129442  631515 pod_ready.go:94] pod "kube-proxy-fdthr" is "Ready"
	I1025 10:21:40.129471  631515 pod_ready.go:86] duration metric: took 401.343438ms for pod "kube-proxy-fdthr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:40.329196  631515 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:40.728428  631515 pod_ready.go:94] pod "kube-scheduler-no-preload-899665" is "Ready"
	I1025 10:21:40.728461  631515 pod_ready.go:86] duration metric: took 399.238728ms for pod "kube-scheduler-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:40.728477  631515 pod_ready.go:40] duration metric: took 39.908384057s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:40.776763  631515 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 10:21:40.778765  631515 out.go:179] * Done! kubectl is now configured to use "no-preload-899665" cluster and "default" namespace by default
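The pod_ready loop that just finished for no-preload-899665 is minikube's own readiness poll over the labelled control-plane pods. A rough kubectl equivalent of one of those waits, reusing the kube-dns label from the log:

    kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m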
	I1025 10:21:40.541552  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:41.041202  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:41.540928  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:41.626698  638584 kubeadm.go:1113] duration metric: took 4.676682024s to wait for elevateKubeSystemPrivileges
	I1025 10:21:41.626740  638584 kubeadm.go:402] duration metric: took 15.779813606s to StartCluster
	I1025 10:21:41.626763  638584 settings.go:142] acquiring lock: {Name:mkccf1ae03faaf532633e370e89b56d0fc3d2b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:41.626844  638584 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:21:41.628485  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:41.628738  638584 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:21:41.628758  638584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 10:21:41.628815  638584 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:21:41.628922  638584 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-683681"
	I1025 10:21:41.628947  638584 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-683681"
	I1025 10:21:41.628984  638584 host.go:66] Checking if "embed-certs-683681" exists ...
	I1025 10:21:41.628970  638584 addons.go:69] Setting default-storageclass=true in profile "embed-certs-683681"
	I1025 10:21:41.629014  638584 config.go:182] Loaded profile config "embed-certs-683681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:41.629033  638584 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-683681"
	I1025 10:21:41.629466  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:41.629530  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:41.632478  638584 out.go:179] * Verifying Kubernetes components...
	I1025 10:21:41.635235  638584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:41.654284  638584 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:21:41.655720  638584 addons.go:238] Setting addon default-storageclass=true in "embed-certs-683681"
	I1025 10:21:41.655762  638584 host.go:66] Checking if "embed-certs-683681" exists ...
	I1025 10:21:41.656106  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:41.656203  638584 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:21:41.656228  638584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:21:41.656290  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:41.679823  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:41.684242  638584 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:21:41.684268  638584 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:21:41.684345  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:41.712034  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:41.726056  638584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 10:21:41.804301  638584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:21:41.809475  638584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:21:41.831472  638584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:21:41.912561  638584 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
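The sed pipeline at 10:21:41.726056 above rewrites the coredns ConfigMap in place; after the replace, the Corefile carries a log directive and a hosts block so in-cluster lookups of host.minikube.internal resolve to the gateway. The resulting fragment looks roughly like this (other plugins elided):

        log
        errors
        hosts {
           192.168.94.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf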
	I1025 10:21:42.139096  638584 node_ready.go:35] waiting up to 6m0s for node "embed-certs-683681" to be "Ready" ...
	I1025 10:21:42.145509  638584 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1025 10:21:39.755018  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:41.756413  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:42.146900  638584 addons.go:514] duration metric: took 518.085843ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 10:21:42.416647  638584 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-683681" context rescaled to 1 replicas
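The rescale noted above trims the default two CoreDNS replicas down to one, which is enough for a single-node cluster; the plain kubectl form of the same operation would be:

    kubectl -n kube-system scale deployment coredns --replicas=1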
	W1025 10:21:44.142621  638584 node_ready.go:57] node "embed-certs-683681" has "Ready":"False" status (will retry)
	W1025 10:21:44.256001  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:46.755543  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:47.755253  636484 pod_ready.go:94] pod "coredns-66bc5c9577-rznxv" is "Ready"
	I1025 10:21:47.755285  636484 pod_ready.go:86] duration metric: took 31.006445495s for pod "coredns-66bc5c9577-rznxv" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.758305  636484 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.763202  636484 pod_ready.go:94] pod "etcd-default-k8s-diff-port-767846" is "Ready"
	I1025 10:21:47.763230  636484 pod_ready.go:86] duration metric: took 4.871359ms for pod "etcd-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.765533  636484 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.769981  636484 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-767846" is "Ready"
	I1025 10:21:47.770085  636484 pod_ready.go:86] duration metric: took 4.518205ms for pod "kube-apiserver-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.772484  636484 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.952605  636484 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-767846" is "Ready"
	I1025 10:21:47.952636  636484 pod_ready.go:86] duration metric: took 180.129601ms for pod "kube-controller-manager-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:48.153608  636484 pod_ready.go:83] waiting for pod "kube-proxy-cvm5c" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:48.552560  636484 pod_ready.go:94] pod "kube-proxy-cvm5c" is "Ready"
	I1025 10:21:48.552591  636484 pod_ready.go:86] duration metric: took 398.954024ms for pod "kube-proxy-cvm5c" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:48.753044  636484 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:49.152785  636484 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-767846" is "Ready"
	I1025 10:21:49.152816  636484 pod_ready.go:86] duration metric: took 399.744601ms for pod "kube-scheduler-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:49.152828  636484 pod_ready.go:40] duration metric: took 32.410721068s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:49.201278  636484 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 10:21:49.203247  636484 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-767846" cluster and "default" namespace by default
	W1025 10:21:46.143197  638584 node_ready.go:57] node "embed-certs-683681" has "Ready":"False" status (will retry)
	W1025 10:21:48.642439  638584 node_ready.go:57] node "embed-certs-683681" has "Ready":"False" status (will retry)
	W1025 10:21:50.642613  638584 node_ready.go:57] node "embed-certs-683681" has "Ready":"False" status (will retry)
	I1025 10:21:52.643144  638584 node_ready.go:49] node "embed-certs-683681" is "Ready"
	I1025 10:21:52.643184  638584 node_ready.go:38] duration metric: took 10.504034315s for node "embed-certs-683681" to be "Ready" ...
	I1025 10:21:52.643202  638584 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:21:52.643262  638584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:21:52.659492  638584 api_server.go:72] duration metric: took 11.030720868s to wait for apiserver process to appear ...
	I1025 10:21:52.659528  638584 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:21:52.659553  638584 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 10:21:52.666017  638584 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1025 10:21:52.667256  638584 api_server.go:141] control plane version: v1.34.1
	I1025 10:21:52.667289  638584 api_server.go:131] duration metric: took 7.752823ms to wait for apiserver health ...
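The healthz probe above can be reproduced by hand against the same endpoint; /healthz is served to anonymous clients by default, and -k skips verification since plain curl does not trust the cluster CA (a sketch, not how minikube issues the request):

    curl -k https://192.168.94.2:8443/healthz   # expect: ok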
	I1025 10:21:52.667300  638584 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:21:52.670860  638584 system_pods.go:59] 8 kube-system pods found
	I1025 10:21:52.670907  638584 system_pods.go:61] "coredns-66bc5c9577-545dp" [a2709fe3-a1d1-4394-8cf7-3776dc8fd318] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:52.670917  638584 system_pods.go:61] "etcd-embed-certs-683681" [efd93203-1fbf-495a-8d60-73421d0a6d9c] Running
	I1025 10:21:52.670928  638584 system_pods.go:61] "kindnet-5zktx" [3398616a-6eb4-432e-bb84-ae1f166c7e71] Running
	I1025 10:21:52.670934  638584 system_pods.go:61] "kube-apiserver-embed-certs-683681" [0ff30802-f9d1-465d-8d94-769528b99497] Running
	I1025 10:21:52.670944  638584 system_pods.go:61] "kube-controller-manager-embed-certs-683681" [b794e426-ac8e-48d6-b9e3-38998ed4f272] Running
	I1025 10:21:52.670949  638584 system_pods.go:61] "kube-proxy-dbks6" [551b9ca3-e53d-4be0-bcb5-b96d76be6c14] Running
	I1025 10:21:52.670958  638584 system_pods.go:61] "kube-scheduler-embed-certs-683681" [97ed6526-e7b5-4086-a584-7e0cb301fb30] Running
	I1025 10:21:52.670966  638584 system_pods.go:61] "storage-provisioner" [42d81686-dd78-4ed1-9ead-cbcdca1d14ce] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:52.670977  638584 system_pods.go:74] duration metric: took 3.669298ms to wait for pod list to return data ...
	I1025 10:21:52.670994  638584 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:21:52.673975  638584 default_sa.go:45] found service account: "default"
	I1025 10:21:52.674010  638584 default_sa.go:55] duration metric: took 3.005154ms for default service account to be created ...
	I1025 10:21:52.674024  638584 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:21:52.677130  638584 system_pods.go:86] 8 kube-system pods found
	I1025 10:21:52.677169  638584 system_pods.go:89] "coredns-66bc5c9577-545dp" [a2709fe3-a1d1-4394-8cf7-3776dc8fd318] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:52.677181  638584 system_pods.go:89] "etcd-embed-certs-683681" [efd93203-1fbf-495a-8d60-73421d0a6d9c] Running
	I1025 10:21:52.677191  638584 system_pods.go:89] "kindnet-5zktx" [3398616a-6eb4-432e-bb84-ae1f166c7e71] Running
	I1025 10:21:52.677195  638584 system_pods.go:89] "kube-apiserver-embed-certs-683681" [0ff30802-f9d1-465d-8d94-769528b99497] Running
	I1025 10:21:52.677201  638584 system_pods.go:89] "kube-controller-manager-embed-certs-683681" [b794e426-ac8e-48d6-b9e3-38998ed4f272] Running
	I1025 10:21:52.677206  638584 system_pods.go:89] "kube-proxy-dbks6" [551b9ca3-e53d-4be0-bcb5-b96d76be6c14] Running
	I1025 10:21:52.677212  638584 system_pods.go:89] "kube-scheduler-embed-certs-683681" [97ed6526-e7b5-4086-a584-7e0cb301fb30] Running
	I1025 10:21:52.677223  638584 system_pods.go:89] "storage-provisioner" [42d81686-dd78-4ed1-9ead-cbcdca1d14ce] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:52.677255  638584 retry.go:31] will retry after 207.699186ms: missing components: kube-dns
	I1025 10:21:52.889747  638584 system_pods.go:86] 8 kube-system pods found
	I1025 10:21:52.889810  638584 system_pods.go:89] "coredns-66bc5c9577-545dp" [a2709fe3-a1d1-4394-8cf7-3776dc8fd318] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:52.889819  638584 system_pods.go:89] "etcd-embed-certs-683681" [efd93203-1fbf-495a-8d60-73421d0a6d9c] Running
	I1025 10:21:52.889834  638584 system_pods.go:89] "kindnet-5zktx" [3398616a-6eb4-432e-bb84-ae1f166c7e71] Running
	I1025 10:21:52.889839  638584 system_pods.go:89] "kube-apiserver-embed-certs-683681" [0ff30802-f9d1-465d-8d94-769528b99497] Running
	I1025 10:21:52.889854  638584 system_pods.go:89] "kube-controller-manager-embed-certs-683681" [b794e426-ac8e-48d6-b9e3-38998ed4f272] Running
	I1025 10:21:52.889859  638584 system_pods.go:89] "kube-proxy-dbks6" [551b9ca3-e53d-4be0-bcb5-b96d76be6c14] Running
	I1025 10:21:52.889867  638584 system_pods.go:89] "kube-scheduler-embed-certs-683681" [97ed6526-e7b5-4086-a584-7e0cb301fb30] Running
	I1025 10:21:52.889879  638584 system_pods.go:89] "storage-provisioner" [42d81686-dd78-4ed1-9ead-cbcdca1d14ce] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:52.889906  638584 retry.go:31] will retry after 319.387436ms: missing components: kube-dns
	I1025 10:21:53.212708  638584 system_pods.go:86] 8 kube-system pods found
	I1025 10:21:53.212741  638584 system_pods.go:89] "coredns-66bc5c9577-545dp" [a2709fe3-a1d1-4394-8cf7-3776dc8fd318] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:53.212748  638584 system_pods.go:89] "etcd-embed-certs-683681" [efd93203-1fbf-495a-8d60-73421d0a6d9c] Running
	I1025 10:21:53.212753  638584 system_pods.go:89] "kindnet-5zktx" [3398616a-6eb4-432e-bb84-ae1f166c7e71] Running
	I1025 10:21:53.212757  638584 system_pods.go:89] "kube-apiserver-embed-certs-683681" [0ff30802-f9d1-465d-8d94-769528b99497] Running
	I1025 10:21:53.212762  638584 system_pods.go:89] "kube-controller-manager-embed-certs-683681" [b794e426-ac8e-48d6-b9e3-38998ed4f272] Running
	I1025 10:21:53.212765  638584 system_pods.go:89] "kube-proxy-dbks6" [551b9ca3-e53d-4be0-bcb5-b96d76be6c14] Running
	I1025 10:21:53.212769  638584 system_pods.go:89] "kube-scheduler-embed-certs-683681" [97ed6526-e7b5-4086-a584-7e0cb301fb30] Running
	I1025 10:21:53.212772  638584 system_pods.go:89] "storage-provisioner" [42d81686-dd78-4ed1-9ead-cbcdca1d14ce] Running
	I1025 10:21:53.212781  638584 system_pods.go:126] duration metric: took 538.748598ms to wait for k8s-apps to be running ...
	I1025 10:21:53.212792  638584 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:21:53.212838  638584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:21:53.227721  638584 system_svc.go:56] duration metric: took 14.91387ms WaitForService to wait for kubelet
	I1025 10:21:53.227757  638584 kubeadm.go:586] duration metric: took 11.598992037s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:21:53.227783  638584 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:21:53.231073  638584 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 10:21:53.231102  638584 node_conditions.go:123] node cpu capacity is 8
	I1025 10:21:53.231116  638584 node_conditions.go:105] duration metric: took 3.327789ms to run NodePressure ...
	I1025 10:21:53.231127  638584 start.go:241] waiting for startup goroutines ...
	I1025 10:21:53.231134  638584 start.go:246] waiting for cluster config update ...
	I1025 10:21:53.231145  638584 start.go:255] writing updated cluster config ...
	I1025 10:21:53.231433  638584 ssh_runner.go:195] Run: rm -f paused
	I1025 10:21:53.235996  638584 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:53.239628  638584 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-545dp" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.246519  638584 pod_ready.go:94] pod "coredns-66bc5c9577-545dp" is "Ready"
	I1025 10:21:54.246556  638584 pod_ready.go:86] duration metric: took 1.006903697s for pod "coredns-66bc5c9577-545dp" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.249657  638584 pod_ready.go:83] waiting for pod "etcd-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.254284  638584 pod_ready.go:94] pod "etcd-embed-certs-683681" is "Ready"
	I1025 10:21:54.254351  638584 pod_ready.go:86] duration metric: took 4.629709ms for pod "etcd-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.256768  638584 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.261130  638584 pod_ready.go:94] pod "kube-apiserver-embed-certs-683681" is "Ready"
	I1025 10:21:54.261157  638584 pod_ready.go:86] duration metric: took 4.363563ms for pod "kube-apiserver-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.263224  638584 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.443581  638584 pod_ready.go:94] pod "kube-controller-manager-embed-certs-683681" is "Ready"
	I1025 10:21:54.443610  638584 pod_ready.go:86] duration metric: took 180.36054ms for pod "kube-controller-manager-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.644082  638584 pod_ready.go:83] waiting for pod "kube-proxy-dbks6" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:55.044226  638584 pod_ready.go:94] pod "kube-proxy-dbks6" is "Ready"
	I1025 10:21:55.044259  638584 pod_ready.go:86] duration metric: took 400.15124ms for pod "kube-proxy-dbks6" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:55.243900  638584 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:55.643886  638584 pod_ready.go:94] pod "kube-scheduler-embed-certs-683681" is "Ready"
	I1025 10:21:55.643919  638584 pod_ready.go:86] duration metric: took 399.992242ms for pod "kube-scheduler-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:55.643935  638584 pod_ready.go:40] duration metric: took 2.407895178s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:55.697477  638584 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 10:21:55.699399  638584 out.go:179] * Done! kubectl is now configured to use "embed-certs-683681" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 25 10:21:26 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:26.587272916Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:21:26 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:26.591016928Z" level=info msg="Created container ca8f4fcd2a56062f5c1f17ebc13c4f6bf06374fe033976033296714be035dadc: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p/dashboard-metrics-scraper" id=2cdc36db-af74-402d-823a-e985d95d582f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:21:26 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:26.591267616Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:21:26 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:26.591292385Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:21:26 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:26.591310547Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:21:26 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:26.591762895Z" level=info msg="Starting container: ca8f4fcd2a56062f5c1f17ebc13c4f6bf06374fe033976033296714be035dadc" id=1a85c2d4-a979-4e1e-a5bd-3655a0b55c45 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:21:26 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:26.594184544Z" level=info msg="Started container" PID=1723 containerID=ca8f4fcd2a56062f5c1f17ebc13c4f6bf06374fe033976033296714be035dadc description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p/dashboard-metrics-scraper id=1a85c2d4-a979-4e1e-a5bd-3655a0b55c45 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7a196dde484e7e357d954640a68a59c9b6256c089007961aac9fa38cccb2da18
	Oct 25 10:21:26 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:26.596356751Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:21:26 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:26.596386294Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:21:26 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:26.596413828Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:21:26 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:26.601562104Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:21:26 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:26.601591359Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:21:27 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:27.547144922Z" level=info msg="Removing container: aae61be449204dff95396d9dbc0f4ba5dc97b70b07826043e04345a10d421a76" id=829e32dc-17f7-4b42-a90c-838524554323 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:21:27 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:27.558373893Z" level=info msg="Removed container aae61be449204dff95396d9dbc0f4ba5dc97b70b07826043e04345a10d421a76: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p/dashboard-metrics-scraper" id=829e32dc-17f7-4b42-a90c-838524554323 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:21:48 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:48.45523542Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8f6eeebe-3517-47f5-8b42-29890f379a85 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:21:48 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:48.456510216Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5ec3dc0d-e2ee-423d-a981-f3023fd210d2 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:21:48 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:48.458035169Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p/dashboard-metrics-scraper" id=7e43b90e-72fd-4da8-a0bb-1631f8d733e8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:21:48 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:48.458186641Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:48 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:48.464745392Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:48 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:48.465305504Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:48 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:48.502757488Z" level=info msg="Created container 1c249100b1cdb4e0f46f4f1eee7d35d1ec8fc6f35a9262f42b142aeb9b478f15: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p/dashboard-metrics-scraper" id=7e43b90e-72fd-4da8-a0bb-1631f8d733e8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:21:48 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:48.503602952Z" level=info msg="Starting container: 1c249100b1cdb4e0f46f4f1eee7d35d1ec8fc6f35a9262f42b142aeb9b478f15" id=7fbc00af-4480-4082-a7b4-3509e9369c53 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:21:48 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:48.506064007Z" level=info msg="Started container" PID=1795 containerID=1c249100b1cdb4e0f46f4f1eee7d35d1ec8fc6f35a9262f42b142aeb9b478f15 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p/dashboard-metrics-scraper id=7fbc00af-4480-4082-a7b4-3509e9369c53 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7a196dde484e7e357d954640a68a59c9b6256c089007961aac9fa38cccb2da18
	Oct 25 10:21:48 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:48.602853051Z" level=info msg="Removing container: ca8f4fcd2a56062f5c1f17ebc13c4f6bf06374fe033976033296714be035dadc" id=c56ed25c-8d92-42f5-b04d-17d477ac91cc name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:21:48 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:48.613274469Z" level=info msg="Removed container ca8f4fcd2a56062f5c1f17ebc13c4f6bf06374fe033976033296714be035dadc: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p/dashboard-metrics-scraper" id=c56ed25c-8d92-42f5-b04d-17d477ac91cc name=/runtime.v1.RuntimeService/RemoveContainer
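	The Created/Started/Removed cycle above shows dashboard-metrics-scraper crash-looping: each restart creates a fresh container and removes the previous one (it also appears as Exited with ATTEMPT 2 in the status table below). Its state can be checked on the node with crictl:

	    sudo crictl ps -a --name dashboard-metrics-scraper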
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	1c249100b1cdb       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   2                   7a196dde484e7       dashboard-metrics-scraper-6ffb444bf9-vbr9p             kubernetes-dashboard
	fb5a07f67d104       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   083689c88ea02       kubernetes-dashboard-855c9754f9-wzpft                  kubernetes-dashboard
	24856409af1d2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           47 seconds ago      Running             storage-provisioner         1                   0bf2c373fa8bc       storage-provisioner                                    kube-system
	fed6cef8fa113       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           47 seconds ago      Running             busybox                     1                   8da6339621c64       busybox                                                default
	09e2459273fad       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           47 seconds ago      Running             kindnet-cni                 0                   d2a2600813b0c       kindnet-vcqs2                                          kube-system
	2f0454c1c473b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   0bf2c373fa8bc       storage-provisioner                                    kube-system
	ca8e9fdba848b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           48 seconds ago      Running             coredns                     0                   6809cb7f0bba0       coredns-66bc5c9577-rznxv                               kube-system
	040afacf3651f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           48 seconds ago      Running             kube-proxy                  0                   292c406022822       kube-proxy-cvm5c                                       kube-system
	5651b5355eb31       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           51 seconds ago      Running             etcd                        0                   5af95fde8cdc4       etcd-default-k8s-diff-port-767846                      kube-system
	4a3076ac0e1e7       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           51 seconds ago      Running             kube-controller-manager     0                   c2666612f8730       kube-controller-manager-default-k8s-diff-port-767846   kube-system
	19816f19d39c5       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           51 seconds ago      Running             kube-scheduler              0                   d8062c2d8805f       kube-scheduler-default-k8s-diff-port-767846            kube-system
	93e7c0501a9a9       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           51 seconds ago      Running             kube-apiserver              0                   46c18b51b3782       kube-apiserver-default-k8s-diff-port-767846            kube-system
	
	
	==> coredns [ca8e9fdba848b911be60a6b3b46d5c7a4141cbb69f8d11609a1d58392aeee7c1] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40328 - 50857 "HINFO IN 9163499815538976087.2896879621534406158. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.108690778s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-767846
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-767846
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=default-k8s-diff-port-767846
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_20_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:20:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-767846
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:21:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:21:44 +0000   Sat, 25 Oct 2025 10:20:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:21:44 +0000   Sat, 25 Oct 2025 10:20:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:21:44 +0000   Sat, 25 Oct 2025 10:20:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:21:44 +0000   Sat, 25 Oct 2025 10:20:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-767846
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                993ff0b7-fce7-4433-b2bb-acc59f575ba5
	  Boot ID:                    41de8bfa-0cc1-441f-80d4-c56fb8371229
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-66bc5c9577-rznxv                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     101s
	  kube-system                 etcd-default-k8s-diff-port-767846                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         107s
	  kube-system                 kindnet-vcqs2                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      101s
	  kube-system                 kube-apiserver-default-k8s-diff-port-767846             250m (3%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-767846    200m (2%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-cvm5c                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-default-k8s-diff-port-767846             100m (1%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vbr9p              0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-wzpft                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 100s               kube-proxy       
	  Normal  Starting                 48s                kube-proxy       
	  Normal  NodeHasSufficientMemory  107s               kubelet          Node default-k8s-diff-port-767846 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s               kubelet          Node default-k8s-diff-port-767846 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s               kubelet          Node default-k8s-diff-port-767846 status is now: NodeHasSufficientPID
	  Normal  Starting                 107s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           102s               node-controller  Node default-k8s-diff-port-767846 event: Registered Node default-k8s-diff-port-767846 in Controller
	  Normal  NodeReady                90s                kubelet          Node default-k8s-diff-port-767846 status is now: NodeReady
	  Normal  Starting                 52s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)  kubelet          Node default-k8s-diff-port-767846 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)  kubelet          Node default-k8s-diff-port-767846 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 52s)  kubelet          Node default-k8s-diff-port-767846 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                node-controller  Node default-k8s-diff-port-767846 event: Registered Node default-k8s-diff-port-767846 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[Oct25 10:18] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 3d 4d bf 49 5d 08 06
	[  +0.000365] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fe 72 b8 ab d2 81 08 06
	[ +29.291338] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 96 23 11 37 e3 00 08 06
	[  +0.000335] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[ +21.527050] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 89 98 95 1f c3 08 06
	[  +0.000689] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[Oct25 10:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[  +9.472150] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
	[  +6.585715] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ce 90 e9 36 a0 95 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[ +15.111475] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 5e 04 d2 54 0d 08 06
	[  +0.000467] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
	
	
	==> etcd [5651b5355eb316ad91569abe8d79084a109bfb7f5e3317226217acc032d02de1] <==
	{"level":"warn","ts":"2025-10-25T10:21:15.388722Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"145.693598ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T10:21:15.389000Z","caller":"traceutil/trace.go:172","msg":"trace[1265438700] range","detail":"{range_begin:/registry/clusterrolebindings; range_end:; response_count:0; response_revision:458; }","duration":"145.976612ms","start":"2025-10-25T10:21:15.243009Z","end":"2025-10-25T10:21:15.388986Z","steps":["trace[1265438700] 'agreement among raft nodes before linearized reading'  (duration: 113.85484ms)","trace[1265438700] 'range keys from in-memory index tree'  (duration: 31.820531ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T10:21:15.389121Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.555005ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/default-k8s-diff-port-767846.1871b4c14445cdb9\" limit:1 ","response":"range_response_count:1 size:793"}
	{"level":"info","ts":"2025-10-25T10:21:15.389168Z","caller":"traceutil/trace.go:172","msg":"trace[1125304537] range","detail":"{range_begin:/registry/events/default/default-k8s-diff-port-767846.1871b4c14445cdb9; range_end:; response_count:1; response_revision:460; }","duration":"121.607993ms","start":"2025-10-25T10:21:15.267550Z","end":"2025-10-25T10:21:15.389158Z","steps":["trace[1125304537] 'agreement among raft nodes before linearized reading'  (duration: 121.466738ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:21:15.526678Z","caller":"traceutil/trace.go:172","msg":"trace[626839294] linearizableReadLoop","detail":"{readStateIndex:491; appliedIndex:491; }","duration":"126.89204ms","start":"2025-10-25T10:21:15.399749Z","end":"2025-10-25T10:21:15.526641Z","steps":["trace[626839294] 'read index received'  (duration: 126.881605ms)","trace[626839294] 'applied index is now lower than readState.Index'  (duration: 8.668µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T10:21:15.560301Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.520116ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-view\" limit:1 ","response":"range_response_count:1 size:2030"}
	{"level":"info","ts":"2025-10-25T10:21:15.560470Z","caller":"traceutil/trace.go:172","msg":"trace[941068557] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-view; range_end:; response_count:1; response_revision:461; }","duration":"160.708342ms","start":"2025-10-25T10:21:15.399740Z","end":"2025-10-25T10:21:15.560449Z","steps":["trace[941068557] 'agreement among raft nodes before linearized reading'  (duration: 127.044473ms)","trace[941068557] 'range keys from in-memory index tree'  (duration: 33.34065ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T10:21:15.560527Z","caller":"traceutil/trace.go:172","msg":"trace[102056638] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"156.010479ms","start":"2025-10-25T10:21:15.404499Z","end":"2025-10-25T10:21:15.560510Z","steps":["trace[102056638] 'process raft request'  (duration: 155.955149ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:21:15.560777Z","caller":"traceutil/trace.go:172","msg":"trace[1661513789] transaction","detail":"{read_only:false; response_revision:462; number_of_response:1; }","duration":"161.2039ms","start":"2025-10-25T10:21:15.399556Z","end":"2025-10-25T10:21:15.560760Z","steps":["trace[1661513789] 'process raft request'  (duration: 127.234079ms)","trace[1661513789] 'compare'  (duration: 33.40684ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T10:21:15.560871Z","caller":"traceutil/trace.go:172","msg":"trace[827880719] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"159.886445ms","start":"2025-10-25T10:21:15.400971Z","end":"2025-10-25T10:21:15.560857Z","steps":["trace[827880719] 'process raft request'  (duration: 159.40968ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:21:15.688574Z","caller":"traceutil/trace.go:172","msg":"trace[936086823] linearizableReadLoop","detail":"{readStateIndex:494; appliedIndex:494; }","duration":"116.398024ms","start":"2025-10-25T10:21:15.572145Z","end":"2025-10-25T10:21:15.688543Z","steps":["trace[936086823] 'read index received'  (duration: 116.382763ms)","trace[936086823] 'applied index is now lower than readState.Index'  (duration: 12.749µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T10:21:15.740073Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"167.903144ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/cluster-admin\" limit:1 ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2025-10-25T10:21:15.740139Z","caller":"traceutil/trace.go:172","msg":"trace[669493993] range","detail":"{range_begin:/registry/clusterroles/cluster-admin; range_end:; response_count:1; response_revision:464; }","duration":"167.986884ms","start":"2025-10-25T10:21:15.572135Z","end":"2025-10-25T10:21:15.740122Z","steps":["trace[669493993] 'agreement among raft nodes before linearized reading'  (duration: 116.482321ms)","trace[669493993] 'range keys from in-memory index tree'  (duration: 51.270223ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T10:21:15.740211Z","caller":"traceutil/trace.go:172","msg":"trace[423577085] transaction","detail":"{read_only:false; response_revision:466; number_of_response:1; }","duration":"164.616026ms","start":"2025-10-25T10:21:15.575585Z","end":"2025-10-25T10:21:15.740201Z","steps":["trace[423577085] 'process raft request'  (duration: 164.560143ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:21:15.740200Z","caller":"traceutil/trace.go:172","msg":"trace[879190047] transaction","detail":"{read_only:false; response_revision:465; number_of_response:1; }","duration":"170.709428ms","start":"2025-10-25T10:21:15.569462Z","end":"2025-10-25T10:21:15.740171Z","steps":["trace[879190047] 'process raft request'  (duration: 119.113975ms)","trace[879190047] 'compare'  (duration: 51.430181ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T10:21:15.740391Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"167.624921ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kubernetes-dashboard\" limit:1 ","response":"range_response_count:1 size:897"}
	{"level":"info","ts":"2025-10-25T10:21:15.740434Z","caller":"traceutil/trace.go:172","msg":"trace[607198520] range","detail":"{range_begin:/registry/namespaces/kubernetes-dashboard; range_end:; response_count:1; response_revision:466; }","duration":"167.678646ms","start":"2025-10-25T10:21:15.572743Z","end":"2025-10-25T10:21:15.740422Z","steps":["trace[607198520] 'agreement among raft nodes before linearized reading'  (duration: 167.513996ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:21:16.080869Z","caller":"traceutil/trace.go:172","msg":"trace[713184015] linearizableReadLoop","detail":"{readStateIndex:501; appliedIndex:501; }","duration":"251.623203ms","start":"2025-10-25T10:21:15.829216Z","end":"2025-10-25T10:21:16.080839Z","steps":["trace[713184015] 'read index received'  (duration: 251.613807ms)","trace[713184015] 'applied index is now lower than readState.Index'  (duration: 8.15µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T10:21:16.081544Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"256.951903ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kubernetes-dashboard/kubernetes-dashboard\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T10:21:16.082429Z","caller":"traceutil/trace.go:172","msg":"trace[2103869838] range","detail":"{range_begin:/registry/rolebindings/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:0; response_revision:471; }","duration":"257.920856ms","start":"2025-10-25T10:21:15.824486Z","end":"2025-10-25T10:21:16.082407Z","steps":["trace[2103869838] 'agreement among raft nodes before linearized reading'  (duration: 256.476122ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:21:16.082264Z","caller":"traceutil/trace.go:172","msg":"trace[1942929163] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"253.880677ms","start":"2025-10-25T10:21:15.828365Z","end":"2025-10-25T10:21:16.082246Z","steps":["trace[1942929163] 'process raft request'  (duration: 252.650208ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:21:16.082625Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"244.573435ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/default-k8s-diff-port-767846.1871b4c14445964c\" limit:1 ","response":"range_response_count:1 size:797"}
	{"level":"warn","ts":"2025-10-25T10:21:16.082431Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"248.027449ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/view\" limit:1 ","response":"range_response_count:1 size:2208"}
	{"level":"info","ts":"2025-10-25T10:21:16.082848Z","caller":"traceutil/trace.go:172","msg":"trace[1376153136] range","detail":"{range_begin:/registry/clusterroles/view; range_end:; response_count:1; response_revision:472; }","duration":"248.447742ms","start":"2025-10-25T10:21:15.834381Z","end":"2025-10-25T10:21:16.082828Z","steps":["trace[1376153136] 'agreement among raft nodes before linearized reading'  (duration: 247.923851ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:21:16.082655Z","caller":"traceutil/trace.go:172","msg":"trace[614345126] range","detail":"{range_begin:/registry/events/default/default-k8s-diff-port-767846.1871b4c14445964c; range_end:; response_count:1; response_revision:472; }","duration":"244.606866ms","start":"2025-10-25T10:21:15.838038Z","end":"2025-10-25T10:21:16.082645Z","steps":["trace[614345126] 'agreement among raft nodes before linearized reading'  (duration: 244.512212ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:22:03 up  2:04,  0 user,  load average: 4.96, 5.06, 5.93
	Linux default-k8s-diff-port-767846 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [09e2459273fad439995d9ffdb8adfd372d7c377970843fbc1f657d31bc15c555] <==
	I1025 10:21:16.367670       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:21:16.368094       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1025 10:21:16.368709       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:21:16.368832       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:21:16.368910       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:21:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:21:16.571130       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:21:16.571557       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:21:16.571588       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:21:16.571787       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 10:21:16.972273       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:21:16.972305       1 metrics.go:72] Registering metrics
	I1025 10:21:16.972390       1 controller.go:711] "Syncing nftables rules"
	I1025 10:21:26.571550       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 10:21:26.571614       1 main.go:301] handling current node
	I1025 10:21:36.577609       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 10:21:36.577646       1 main.go:301] handling current node
	I1025 10:21:46.571954       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 10:21:46.572015       1 main.go:301] handling current node
	I1025 10:21:56.572421       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 10:21:56.572474       1 main.go:301] handling current node
	
	
	==> kube-apiserver [93e7c0501a9a92272de292874e804fe8724d5cd8097e77aa3924e634b8f8d63b] <==
	I1025 10:21:14.008433       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:21:14.008613       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 10:21:14.008643       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 10:21:14.009108       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 10:21:14.009436       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:21:14.009484       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 10:21:14.009518       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 10:21:14.013029       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 10:21:14.013145       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:21:14.014308       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 10:21:14.045953       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:21:14.065348       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1025 10:21:14.164149       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 10:21:14.469480       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:21:14.702373       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:21:15.241399       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:21:15.403851       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:21:15.796249       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:21:16.096025       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:21:16.175882       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.102.150"}
	I1025 10:21:16.194593       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.238.69"}
	I1025 10:21:18.599981       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:21:18.799578       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:21:18.899024       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4a3076ac0e1e7cab1ae1e3436bd70e3c3b3965b186f842a7e0c0d524505d0c57] <==
	I1025 10:21:18.341186       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 10:21:18.344690       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 10:21:18.345101       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 10:21:18.346291       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 10:21:18.346310       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 10:21:18.346342       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 10:21:18.346370       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 10:21:18.346497       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 10:21:18.346517       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:21:18.346558       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:21:18.346639       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 10:21:18.346710       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-767846"
	I1025 10:21:18.346760       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 10:21:18.351785       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 10:21:18.351851       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 10:21:18.351911       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 10:21:18.351922       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 10:21:18.351850       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:21:18.351928       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 10:21:18.354064       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:21:18.366257       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 10:21:18.369614       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 10:21:18.371889       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 10:21:18.373965       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 10:21:18.376367       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [040afacf3651f3df296c0fb9e05451bd6f2a7e10325871a10ea903d99da7a876] <==
	I1025 10:21:15.605434       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:21:15.671393       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:21:15.772537       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:21:15.772587       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1025 10:21:15.772694       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:21:15.829911       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:21:15.829980       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:21:15.841381       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:21:15.841794       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:21:15.841871       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:21:15.843386       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:21:15.843419       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:21:15.843506       1 config.go:200] "Starting service config controller"
	I1025 10:21:15.843518       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:21:15.843498       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:21:15.843542       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:21:15.843794       1 config.go:309] "Starting node config controller"
	I1025 10:21:15.843815       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:21:15.843823       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:21:15.943698       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:21:15.943741       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 10:21:15.943761       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [19816f19d39c5773a667353841a1802f9e8d4a9493ed76177e3cffba9eb45dd7] <==
	I1025 10:21:13.315812       1 serving.go:386] Generated self-signed cert in-memory
	I1025 10:21:14.722073       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:21:14.722134       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:21:14.867401       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:21:14.867457       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:21:14.867599       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:21:14.867658       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:21:14.867817       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:21:14.867915       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 10:21:14.867916       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1025 10:21:14.867940       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1025 10:21:14.968001       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:21:14.968061       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1025 10:21:14.968133       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:21:16 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:16.488793     722 scope.go:117] "RemoveContainer" containerID="2f0454c1c473b531c3c2ce0e0e81352e26d1c0cd6888ff3fe87bd24e68ae0248"
	Oct 25 10:21:19 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:19.071899     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cb132c7c-4000-49c6-a124-5f449d55cb74-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-vbr9p\" (UID: \"cb132c7c-4000-49c6-a124-5f449d55cb74\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p"
	Oct 25 10:21:19 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:19.071976     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd9sd\" (UniqueName: \"kubernetes.io/projected/cb132c7c-4000-49c6-a124-5f449d55cb74-kube-api-access-nd9sd\") pod \"dashboard-metrics-scraper-6ffb444bf9-vbr9p\" (UID: \"cb132c7c-4000-49c6-a124-5f449d55cb74\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p"
	Oct 25 10:21:19 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:19.072015     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f628496a-a0ef-4646-bd5b-6469e37ccbd4-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-wzpft\" (UID: \"f628496a-a0ef-4646-bd5b-6469e37ccbd4\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wzpft"
	Oct 25 10:21:19 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:19.072082     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dv7l\" (UniqueName: \"kubernetes.io/projected/f628496a-a0ef-4646-bd5b-6469e37ccbd4-kube-api-access-9dv7l\") pod \"kubernetes-dashboard-855c9754f9-wzpft\" (UID: \"f628496a-a0ef-4646-bd5b-6469e37ccbd4\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wzpft"
	Oct 25 10:21:23 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:23.603729     722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wzpft" podStartSLOduration=1.899663919 podStartE2EDuration="5.603684029s" podCreationTimestamp="2025-10-25 10:21:18 +0000 UTC" firstStartedPulling="2025-10-25 10:21:19.294796888 +0000 UTC m=+7.947597945" lastFinishedPulling="2025-10-25 10:21:22.99881699 +0000 UTC m=+11.651618055" observedRunningTime="2025-10-25 10:21:23.60343828 +0000 UTC m=+12.256239357" watchObservedRunningTime="2025-10-25 10:21:23.603684029 +0000 UTC m=+12.256502900"
	Oct 25 10:21:26 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:26.536898     722 scope.go:117] "RemoveContainer" containerID="aae61be449204dff95396d9dbc0f4ba5dc97b70b07826043e04345a10d421a76"
	Oct 25 10:21:27 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:27.542237     722 scope.go:117] "RemoveContainer" containerID="aae61be449204dff95396d9dbc0f4ba5dc97b70b07826043e04345a10d421a76"
	Oct 25 10:21:27 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:27.542395     722 scope.go:117] "RemoveContainer" containerID="ca8f4fcd2a56062f5c1f17ebc13c4f6bf06374fe033976033296714be035dadc"
	Oct 25 10:21:27 default-k8s-diff-port-767846 kubelet[722]: E1025 10:21:27.542728     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vbr9p_kubernetes-dashboard(cb132c7c-4000-49c6-a124-5f449d55cb74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p" podUID="cb132c7c-4000-49c6-a124-5f449d55cb74"
	Oct 25 10:21:28 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:28.547571     722 scope.go:117] "RemoveContainer" containerID="ca8f4fcd2a56062f5c1f17ebc13c4f6bf06374fe033976033296714be035dadc"
	Oct 25 10:21:28 default-k8s-diff-port-767846 kubelet[722]: E1025 10:21:28.547779     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vbr9p_kubernetes-dashboard(cb132c7c-4000-49c6-a124-5f449d55cb74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p" podUID="cb132c7c-4000-49c6-a124-5f449d55cb74"
	Oct 25 10:21:34 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:34.475025     722 scope.go:117] "RemoveContainer" containerID="ca8f4fcd2a56062f5c1f17ebc13c4f6bf06374fe033976033296714be035dadc"
	Oct 25 10:21:34 default-k8s-diff-port-767846 kubelet[722]: E1025 10:21:34.475304     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vbr9p_kubernetes-dashboard(cb132c7c-4000-49c6-a124-5f449d55cb74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p" podUID="cb132c7c-4000-49c6-a124-5f449d55cb74"
	Oct 25 10:21:48 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:48.454577     722 scope.go:117] "RemoveContainer" containerID="ca8f4fcd2a56062f5c1f17ebc13c4f6bf06374fe033976033296714be035dadc"
	Oct 25 10:21:48 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:48.601024     722 scope.go:117] "RemoveContainer" containerID="ca8f4fcd2a56062f5c1f17ebc13c4f6bf06374fe033976033296714be035dadc"
	Oct 25 10:21:48 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:48.601280     722 scope.go:117] "RemoveContainer" containerID="1c249100b1cdb4e0f46f4f1eee7d35d1ec8fc6f35a9262f42b142aeb9b478f15"
	Oct 25 10:21:48 default-k8s-diff-port-767846 kubelet[722]: E1025 10:21:48.601568     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vbr9p_kubernetes-dashboard(cb132c7c-4000-49c6-a124-5f449d55cb74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p" podUID="cb132c7c-4000-49c6-a124-5f449d55cb74"
	Oct 25 10:21:54 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:54.474701     722 scope.go:117] "RemoveContainer" containerID="1c249100b1cdb4e0f46f4f1eee7d35d1ec8fc6f35a9262f42b142aeb9b478f15"
	Oct 25 10:21:54 default-k8s-diff-port-767846 kubelet[722]: E1025 10:21:54.474935     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vbr9p_kubernetes-dashboard(cb132c7c-4000-49c6-a124-5f449d55cb74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p" podUID="cb132c7c-4000-49c6-a124-5f449d55cb74"
	Oct 25 10:22:01 default-k8s-diff-port-767846 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:22:01 default-k8s-diff-port-767846 kubelet[722]: I1025 10:22:01.401339     722 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 25 10:22:01 default-k8s-diff-port-767846 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:22:01 default-k8s-diff-port-767846 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 10:22:01 default-k8s-diff-port-767846 systemd[1]: kubelet.service: Consumed 1.858s CPU time.
	
	
	==> kubernetes-dashboard [fb5a07f67d104ece5c4e59cf02a6acaa20151d01116039e6818d51c497d4e740] <==
	2025/10/25 10:21:23 Starting overwatch
	2025/10/25 10:21:23 Using namespace: kubernetes-dashboard
	2025/10/25 10:21:23 Using in-cluster config to connect to apiserver
	2025/10/25 10:21:23 Using secret token for csrf signing
	2025/10/25 10:21:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 10:21:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 10:21:23 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 10:21:23 Generating JWE encryption key
	2025/10/25 10:21:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 10:21:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 10:21:23 Initializing JWE encryption key from synchronized object
	2025/10/25 10:21:23 Creating in-cluster Sidecar client
	2025/10/25 10:21:23 Serving insecurely on HTTP port: 9090
	2025/10/25 10:21:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:21:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [24856409af1d28dfd7c81bbb566035594b19ffe4e449271ef2769f0a51f01272] <==
	W1025 10:21:39.993182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:41.996929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:42.004025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:44.006843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:44.011396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:46.014524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:46.018737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:48.022535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:48.026933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:50.030059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:50.034512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:52.037578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:52.042017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:54.045944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:54.052531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:56.056181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:56.061122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:58.064712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:58.072193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:22:00.075581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:22:00.080468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:22:02.084829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:22:02.089721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:22:04.093500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:22:04.099883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [2f0454c1c473b531c3c2ce0e0e81352e26d1c0cd6888ff3fe87bd24e68ae0248] <==
	I1025 10:21:15.735934       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 10:21:15.737849       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
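
The failures in the dump above share one symptom: coredns and storage-provisioner both time out or get connection-refused dialing 10.96.0.1:443, the in-cluster kubernetes Service VIP, during the apiserver restart window, and both recover once it is serving again. A minimal way to re-check that path by hand (a sketch: it assumes the cluster from this run is still up and that kubectl is pointed at this profile's context):

	# The "kubernetes" Service in the default namespace backs the 10.96.0.1:443 VIP seen in the logs.
	kubectl --context default-k8s-diff-port-767846 get svc kubernetes -o wide
	# Its EndpointSlices should list the control-plane address (192.168.103.2 in this run).
	kubectl --context default-k8s-diff-port-767846 get endpointslices -l kubernetes.io/service-name=kubernetes
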
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-767846 -n default-k8s-diff-port-767846
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-767846 -n default-k8s-diff-port-767846: exit status 2 (359.952705ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
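For context on "(may be ok)": minikube status encodes component state in the exit code bits, roughly 1 for the host, 2 for the cluster/kubelet and 4 for Kubernetes (per its --help text), so exit status 2 right after a pause means only the kubelet bit is set while the host container keeps running. The same probe can be repeated by hand (a sketch, assuming the profile from this run still exists):

	# A non-zero exit here carries the status bits and is informational, not a hard error.
	out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-767846 -n default-k8s-diff-port-767846; echo "exit=$?"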
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-767846 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-767846
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-767846:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a861cbbe8f62d16ea86405f6b301634a956e7a957a4ea978424585deabae4058",
	        "Created": "2025-10-25T10:19:56.495133916Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 636801,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:21:04.318503167Z",
	            "FinishedAt": "2025-10-25T10:21:03.24730017Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/a861cbbe8f62d16ea86405f6b301634a956e7a957a4ea978424585deabae4058/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a861cbbe8f62d16ea86405f6b301634a956e7a957a4ea978424585deabae4058/hostname",
	        "HostsPath": "/var/lib/docker/containers/a861cbbe8f62d16ea86405f6b301634a956e7a957a4ea978424585deabae4058/hosts",
	        "LogPath": "/var/lib/docker/containers/a861cbbe8f62d16ea86405f6b301634a956e7a957a4ea978424585deabae4058/a861cbbe8f62d16ea86405f6b301634a956e7a957a4ea978424585deabae4058-json.log",
	        "Name": "/default-k8s-diff-port-767846",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-767846:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-767846",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a861cbbe8f62d16ea86405f6b301634a956e7a957a4ea978424585deabae4058",
	                "LowerDir": "/var/lib/docker/overlay2/ddb4157cd5afee722521019e7523ab5e85d231f87d65a983b26a341edfbd1bbc-init/diff:/var/lib/docker/overlay2/9d1960ec6ab151df7efbd4d43f9ccd1aaf4e0bd9e7db4285118644a6d5a54279/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ddb4157cd5afee722521019e7523ab5e85d231f87d65a983b26a341edfbd1bbc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ddb4157cd5afee722521019e7523ab5e85d231f87d65a983b26a341edfbd1bbc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ddb4157cd5afee722521019e7523ab5e85d231f87d65a983b26a341edfbd1bbc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-767846",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-767846/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-767846",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-767846",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-767846",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0e205027ecee405decb8328b526c337d6ec42c4c95dbb4a7547276c93105f899",
	            "SandboxKey": "/var/run/docker/netns/0e205027ecee",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-767846": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:62:a1:05:09:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "49994b8d670ad539016da4784c6cdaa9b9b52e8e74fc4aee0b1293b182f436c0",
	                    "EndpointID": "b9bae7737696beddf7f7522975c63359192721ef0c70428ee84d6b262898ffa6",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-767846",
	                        "a861cbbe8f62"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
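The NetworkSettings.Ports map in the inspect output above is what the rest of this run relies on: 22/tcp is published on 127.0.0.1:33123, the address the ssh clients later in this log connect to. A minimal Go sketch of that lookup, assuming only the container name and the JSON shape shown above (illustrative only, not minikube's actual helper):

// Hypothetical sketch: decode `docker container inspect` JSON (as printed
// above) and list the host-mapped ports. Only the fields visible in the
// output above are modeled.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

type containerInspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	// Same container name as in the inspect output above.
	out, err := exec.Command("docker", "container", "inspect", "default-k8s-diff-port-767846").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "inspect failed:", err)
		os.Exit(1)
	}
	var infos []containerInspect // docker inspect emits a JSON array
	if err := json.Unmarshal(out, &infos); err != nil || len(infos) == 0 {
		fmt.Fprintln(os.Stderr, "unexpected inspect output")
		os.Exit(1)
	}
	// For the log above this prints e.g. "22/tcp -> 127.0.0.1:33123".
	for port, bindings := range infos[0].NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
		}
	}
}

The helper runs later in this report take the shortcut form of the same lookup, using an inline Go template instead of decoding the full JSON: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846.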
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-767846 -n default-k8s-diff-port-767846
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-767846 -n default-k8s-diff-port-767846: exit status 2 (380.659845ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-767846 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-767846 logs -n 25: (1.296854499s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-767846 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-667966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p newest-cni-667966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ stop    │ -p default-k8s-diff-port-767846 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:21 UTC │
	│ addons  │ enable dashboard -p no-preload-899665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p no-preload-899665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:21 UTC │
	│ image   │ newest-cni-667966 image list --format=json                                                                                                                                                                                                    │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ pause   │ -p newest-cni-667966 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-767846 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ start   │ -p default-k8s-diff-port-767846 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p newest-cni-667966                                                                                                                                                                                                                          │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p newest-cni-667966                                                                                                                                                                                                                          │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p disable-driver-mounts-805899                                                                                                                                                                                                               │ disable-driver-mounts-805899 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ start   │ -p embed-certs-683681 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-683681           │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ image   │ old-k8s-version-714798 image list --format=json                                                                                                                                                                                               │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ pause   │ -p old-k8s-version-714798 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	│ delete  │ -p old-k8s-version-714798                                                                                                                                                                                                                     │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p old-k8s-version-714798                                                                                                                                                                                                                     │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ image   │ no-preload-899665 image list --format=json                                                                                                                                                                                                    │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ pause   │ -p no-preload-899665 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	│ delete  │ -p no-preload-899665                                                                                                                                                                                                                          │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:22 UTC │
	│ image   │ default-k8s-diff-port-767846 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │ 25 Oct 25 10:22 UTC │
	│ pause   │ -p default-k8s-diff-port-767846 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │                     │
	│ delete  │ -p no-preload-899665                                                                                                                                                                                                                          │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │ 25 Oct 25 10:22 UTC │
	│ addons  │ enable metrics-server -p embed-certs-683681 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-683681           │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:21:10
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:21:10.148251  638584 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:21:10.148605  638584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:21:10.148630  638584 out.go:374] Setting ErrFile to fd 2...
	I1025 10:21:10.148638  638584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:21:10.148938  638584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 10:21:10.149711  638584 out.go:368] Setting JSON to false
	I1025 10:21:10.151634  638584 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7419,"bootTime":1761380251,"procs":447,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 10:21:10.151786  638584 start.go:141] virtualization: kvm guest
	I1025 10:21:10.154262  638584 out.go:179] * [embed-certs-683681] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 10:21:10.155881  638584 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:21:10.155931  638584 notify.go:220] Checking for updates...
	I1025 10:21:10.158857  638584 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:21:10.160458  638584 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:21:10.161966  638584 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	I1025 10:21:10.163444  638584 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 10:21:10.165074  638584 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:21:10.167201  638584 config.go:182] Loaded profile config "default-k8s-diff-port-767846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:10.167413  638584 config.go:182] Loaded profile config "no-preload-899665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:10.167543  638584 config.go:182] Loaded profile config "old-k8s-version-714798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:21:10.167677  638584 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:21:10.195271  638584 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 10:21:10.195411  638584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:21:10.276912  638584 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-25 10:21:10.253206883 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:21:10.277024  638584 docker.go:318] overlay module found
	I1025 10:21:10.278915  638584 out.go:179] * Using the docker driver based on user configuration
	I1025 10:21:10.280189  638584 start.go:305] selected driver: docker
	I1025 10:21:10.280210  638584 start.go:925] validating driver "docker" against <nil>
	I1025 10:21:10.280228  638584 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:21:10.280870  638584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:21:10.351945  638584 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-25 10:21:10.340512633 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:21:10.352169  638584 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 10:21:10.352450  638584 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:21:10.354600  638584 out.go:179] * Using Docker driver with root privileges
	I1025 10:21:10.356067  638584 cni.go:84] Creating CNI manager for ""
	I1025 10:21:10.356119  638584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:21:10.356128  638584 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 10:21:10.356206  638584 start.go:349] cluster config:
	{Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:21:10.359204  638584 out.go:179] * Starting "embed-certs-683681" primary control-plane node in "embed-certs-683681" cluster
	I1025 10:21:10.360475  638584 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:21:10.361884  638584 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:21:10.363223  638584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:10.363261  638584 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:21:10.363282  638584 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 10:21:10.363300  638584 cache.go:58] Caching tarball of preloaded images
	I1025 10:21:10.363426  638584 preload.go:233] Found /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 10:21:10.363440  638584 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:21:10.363573  638584 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/config.json ...
	I1025 10:21:10.363603  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/config.json: {Name:mk7d7cb38e92abe91e5617ae8c0cde69820d256b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:10.401470  638584 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:21:10.401501  638584 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:21:10.401524  638584 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:21:10.401557  638584 start.go:360] acquireMachinesLock for embed-certs-683681: {Name:mkb49d854e007783568583b216321c2ada753d14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:21:10.401681  638584 start.go:364] duration metric: took 100.361µs to acquireMachinesLock for "embed-certs-683681"
	I1025 10:21:10.401719  638584 start.go:93] Provisioning new machine with config: &{Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:21:10.401811  638584 start.go:125] createHost starting for "" (driver="docker")
	I1025 10:21:09.341512  636484 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:21:09.341546  636484 machine.go:96] duration metric: took 4.679953004s to provisionDockerMachine
	I1025 10:21:09.341561  636484 start.go:293] postStartSetup for "default-k8s-diff-port-767846" (driver="docker")
	I1025 10:21:09.341576  636484 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:21:09.341718  636484 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:21:09.341793  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.365110  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:09.484377  636484 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:21:09.489414  636484 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:21:09.489442  636484 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:21:09.489453  636484 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/addons for local assets ...
	I1025 10:21:09.489516  636484 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/files for local assets ...
	I1025 10:21:09.489612  636484 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem -> 3254552.pem in /etc/ssl/certs
	I1025 10:21:09.489735  636484 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:21:09.499262  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:09.521134  636484 start.go:296] duration metric: took 179.55364ms for postStartSetup
	I1025 10:21:09.521229  636484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:21:09.521289  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.546865  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:09.651523  636484 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:21:09.656840  636484 fix.go:56] duration metric: took 5.400890226s for fixHost
	I1025 10:21:09.656881  636484 start.go:83] releasing machines lock for "default-k8s-diff-port-767846", held for 5.400960044s
	I1025 10:21:09.656963  636484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-767846
	I1025 10:21:09.678291  636484 ssh_runner.go:195] Run: cat /version.json
	I1025 10:21:09.678335  636484 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:21:09.678385  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.678417  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.699727  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:09.699888  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:09.801273  636484 ssh_runner.go:195] Run: systemctl --version
	I1025 10:21:09.869861  636484 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:21:09.912691  636484 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:21:09.918693  636484 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:21:09.918789  636484 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:21:09.929691  636484 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:21:09.929723  636484 start.go:495] detecting cgroup driver to use...
	I1025 10:21:09.929768  636484 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 10:21:09.929846  636484 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:21:09.947292  636484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:21:09.962309  636484 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:21:09.962380  636484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:21:09.981742  636484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:21:09.997805  636484 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:21:10.091545  636484 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:21:10.191661  636484 docker.go:234] disabling docker service ...
	I1025 10:21:10.191739  636484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:21:10.211470  636484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:21:10.232902  636484 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:21:10.343594  636484 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:21:10.458272  636484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:21:10.475115  636484 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:21:10.492690  636484 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:21:10.492760  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.505848  636484 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 10:21:10.505908  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.517567  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.531478  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.545455  636484 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:21:10.557702  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.571143  636484 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.582240  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.593233  636484 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:21:10.602910  636484 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:21:10.612119  636484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:10.705561  636484 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:21:10.849205  636484 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:21:10.849299  636484 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:21:10.853987  636484 start.go:563] Will wait 60s for crictl version
	I1025 10:21:10.854061  636484 ssh_runner.go:195] Run: which crictl
	I1025 10:21:10.858281  636484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:21:10.891437  636484 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:21:10.891545  636484 ssh_runner.go:195] Run: crio --version
	I1025 10:21:10.928397  636484 ssh_runner.go:195] Run: crio --version
	I1025 10:21:10.968448  636484 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:21:10.969831  636484 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-767846 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:21:10.988308  636484 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1025 10:21:10.993548  636484 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:21:11.007467  636484 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-767846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-767846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:21:11.007638  636484 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:11.007713  636484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:11.050081  636484 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:11.050104  636484 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:21:11.050159  636484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:11.079408  636484 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:11.079432  636484 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:21:11.079440  636484 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1025 10:21:11.079542  636484 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-767846 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-767846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:21:11.079604  636484 ssh_runner.go:195] Run: crio config
	I1025 10:21:11.135081  636484 cni.go:84] Creating CNI manager for ""
	I1025 10:21:11.135104  636484 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:21:11.135125  636484 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:21:11.135152  636484 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-767846 NodeName:default-k8s-diff-port-767846 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:21:11.135274  636484 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-767846"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:21:11.135376  636484 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:21:11.146044  636484 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:21:11.146127  636484 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:21:11.157527  636484 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1025 10:21:11.173105  636484 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:21:11.194054  636484 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1025 10:21:11.210598  636484 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:21:11.215039  636484 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:21:11.228199  636484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:11.315547  636484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:21:11.344889  636484 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846 for IP: 192.168.103.2
	I1025 10:21:11.344914  636484 certs.go:195] generating shared ca certs ...
	I1025 10:21:11.344936  636484 certs.go:227] acquiring lock for ca certs: {Name:mke559c04eea9c265cb2a7beab0da125bee52db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:11.345096  636484 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key
	I1025 10:21:11.345147  636484 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key
	I1025 10:21:11.345159  636484 certs.go:257] generating profile certs ...
	I1025 10:21:11.345283  636484 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/client.key
	I1025 10:21:11.345382  636484 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/apiserver.key.0fbb729d
	I1025 10:21:11.345433  636484 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/proxy-client.key
	I1025 10:21:11.345576  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem (1338 bytes)
	W1025 10:21:11.345621  636484 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455_empty.pem, impossibly tiny 0 bytes
	I1025 10:21:11.345634  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:21:11.345661  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:21:11.345688  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:21:11.345716  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem (1679 bytes)
	I1025 10:21:11.345768  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:11.346665  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:21:11.371779  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 10:21:11.395674  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:21:11.420943  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:21:11.450225  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 10:21:11.471921  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:21:11.491964  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:21:11.513657  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:21:11.539802  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem --> /usr/share/ca-certificates/325455.pem (1338 bytes)
	I1025 10:21:11.564482  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /usr/share/ca-certificates/3254552.pem (1708 bytes)
	I1025 10:21:11.585472  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:21:11.605762  636484 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:21:11.620550  636484 ssh_runner.go:195] Run: openssl version
	I1025 10:21:11.628742  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/325455.pem && ln -fs /usr/share/ca-certificates/325455.pem /etc/ssl/certs/325455.pem"
	I1025 10:21:11.640494  636484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/325455.pem
	I1025 10:21:11.645456  636484 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:38 /usr/share/ca-certificates/325455.pem
	I1025 10:21:11.645535  636484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/325455.pem
	I1025 10:21:11.681821  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/325455.pem /etc/ssl/certs/51391683.0"
	I1025 10:21:11.692404  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3254552.pem && ln -fs /usr/share/ca-certificates/3254552.pem /etc/ssl/certs/3254552.pem"
	I1025 10:21:11.702722  636484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3254552.pem
	I1025 10:21:11.707367  636484 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:38 /usr/share/ca-certificates/3254552.pem
	I1025 10:21:11.707434  636484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3254552.pem
	I1025 10:21:11.744550  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3254552.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:21:11.754748  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:21:11.765670  636484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:11.770501  636484 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:32 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:11.770568  636484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:11.806437  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
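	The three-step pattern above (copy a CA into /usr/share/ca-certificates, hash it, symlink the hash into /etc/ssl/certs) is the OpenSSL hashed-directory convention: `openssl x509 -hash -noout` prints the subject hash (b5213941 for minikubeCA.pem, hence the b5213941.0 link), and anything that trusts /etc/ssl/certs looks certificates up by that name. A minimal Go sketch of the same convention; `linkBySubjectHash` is an illustrative name, not minikube's actual helper, and it assumes `openssl` is on PATH:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash computes the OpenSSL subject hash of a CA
    // certificate and symlinks it as <hash>.0 in the hashed certs dir,
    // mirroring the "openssl x509 -hash" + "ln -fs" pair in the log.
    func linkBySubjectHash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // ln -fs semantics: replace any existing link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }

	The trailing ".0" is a collision index; OpenSSL would probe ".1", ".2", and so on if two CAs shared the same subject hash.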
	I1025 10:21:11.816622  636484 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:21:11.821750  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:21:11.869084  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:21:11.918865  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:21:11.967891  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:21:12.023868  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:21:12.087958  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
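	Each `-checkend 86400` call above asks openssl whether the certificate will expire within the next 86400 seconds (24 hours); a non-zero exit is what would push minikube to regenerate that cert before restarting the control plane. The same check can be done without shelling out; a hedged stdlib sketch, with `expiresWithin` as an invented name:

    package certs

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d; the log's "-checkend 86400" is the d = 24h case.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }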
	I1025 10:21:12.133903  636484 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-767846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-767846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:21:12.133995  636484 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:21:12.134057  636484 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:21:12.176249  636484 cri.go:89] found id: "5651b5355eb316ad91569abe8d79084a109bfb7f5e3317226217acc032d02de1"
	I1025 10:21:12.176277  636484 cri.go:89] found id: "4a3076ac0e1e7cab1ae1e3436bd70e3c3b3965b186f842a7e0c0d524505d0c57"
	I1025 10:21:12.176284  636484 cri.go:89] found id: "19816f19d39c5773a667353841a1802f9e8d4a9493ed76177e3cffba9eb45dd7"
	I1025 10:21:12.176289  636484 cri.go:89] found id: "93e7c0501a9a92272de292874e804fe8724d5cd8097e77aa3924e634b8f8d63b"
	I1025 10:21:12.176294  636484 cri.go:89] found id: ""
	I1025 10:21:12.176379  636484 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:21:12.191582  636484 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:21:12Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:21:12.191656  636484 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:21:12.201840  636484 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:21:12.201870  636484 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:21:12.201918  636484 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:21:12.211065  636484 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:21:12.211910  636484 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-767846" does not appear in /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:21:12.212424  636484 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-321838/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-767846" cluster setting kubeconfig missing "default-k8s-diff-port-767846" context setting]
	I1025 10:21:12.212991  636484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
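	The two kubeconfig.go lines above detect that both the cluster and the context entry for this profile are missing and rewrite the file under a lock. A minimal sketch of that repair with client-go's clientcmd package; `addProfile` is an invented name, and the sketch assumes the matching user (AuthInfo) entry already exists in the file:

    package kubecfg

    import (
        "k8s.io/client-go/tools/clientcmd"
        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    // addProfile inserts the missing cluster and context entries and
    // rewrites the kubeconfig, as the repair above describes. The caller
    // is expected to hold the file lock, per the lock.go line above.
    func addProfile(path, name, server string, caCert []byte) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        cfg.Clusters[name] = &clientcmdapi.Cluster{
            Server:                   server, // here https://192.168.103.2:8444
            CertificateAuthorityData: caCert,
        }
        cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
        cfg.CurrentContext = name
        return clientcmd.WriteToFile(*cfg, path)
    }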
	I1025 10:21:12.214595  636484 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:21:12.225309  636484 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1025 10:21:12.225361  636484 kubeadm.go:601] duration metric: took 23.484211ms to restartPrimaryControlPlane
	I1025 10:21:12.225372  636484 kubeadm.go:402] duration metric: took 91.480993ms to StartCluster
	I1025 10:21:12.225394  636484 settings.go:142] acquiring lock: {Name:mkccf1ae03faaf532633e370e89b56d0fc3d2b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:12.225489  636484 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:21:12.226739  636484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:12.227039  636484 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:21:12.227167  636484 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:21:12.227262  636484 config.go:182] Loaded profile config "default-k8s-diff-port-767846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:12.227271  636484 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-767846"
	I1025 10:21:12.227291  636484 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-767846"
	W1025 10:21:12.227299  636484 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:21:12.227297  636484 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-767846"
	I1025 10:21:12.227332  636484 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-767846"
	I1025 10:21:12.227339  636484 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-767846"
	W1025 10:21:12.227342  636484 addons.go:247] addon dashboard should already be in state true
	I1025 10:21:12.227353  636484 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-767846"
	I1025 10:21:12.227367  636484 host.go:66] Checking if "default-k8s-diff-port-767846" exists ...
	I1025 10:21:12.227371  636484 host.go:66] Checking if "default-k8s-diff-port-767846" exists ...
	I1025 10:21:12.227806  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.227847  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.227905  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.232961  636484 out.go:179] * Verifying Kubernetes components...
	I1025 10:21:12.234572  636484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:12.260042  636484 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:21:12.260116  636484 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:21:12.261263  636484 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-767846"
	W1025 10:21:12.261282  636484 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:21:12.261305  636484 host.go:66] Checking if "default-k8s-diff-port-767846" exists ...
	I1025 10:21:12.261728  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.262059  636484 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:21:12.262078  636484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:21:12.262129  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:12.265414  636484 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1025 10:21:09.268544  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	W1025 10:21:11.766755  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	W1025 10:21:09.831833  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:12.337504  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	I1025 10:21:12.266825  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:21:12.266852  636484 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:21:12.266926  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:12.302238  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:12.306595  636484 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:21:12.306701  636484 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:21:12.306633  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:12.307467  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:12.337295  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:12.414307  636484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:21:12.436001  636484 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-767846" to be "Ready" ...
	I1025 10:21:12.436611  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:21:12.436644  636484 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:21:12.451080  636484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:21:12.456814  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:21:12.456844  636484 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:21:12.465383  636484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:21:12.479456  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:21:12.479485  636484 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:21:12.501005  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:21:12.501032  636484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:21:12.526625  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:21:12.526672  636484 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:21:12.553034  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:21:12.553076  636484 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:21:12.573193  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:21:12.573227  636484 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:21:12.590613  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:21:12.590687  636484 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:21:12.606035  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:21:12.606071  636484 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:21:12.624851  636484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
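	Note the pattern in the addon steps above: manifests are first staged under /etc/kubernetes/addons ("scp memory" writes assets embedded in the minikube binary straight to the remote path, no local temp file), then everything is applied in a single kubectl call pinned to the cluster's own binary and in-VM kubeconfig. A sketch of assembling that invocation; `applyManifests` is an illustrative name:

    package addons

    import "os/exec"

    // applyManifests builds the single kubectl invocation the log shows:
    // every staged manifest applied at once, using the cluster's own
    // kubectl binary and the in-VM kubeconfig.
    func applyManifests(k8sVersion string, manifests []string) *exec.Cmd {
        args := []string{
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/" + k8sVersion + "/kubectl",
            "apply",
        }
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        return exec.Command("sudo", args...) // sudo accepts VAR=value prefixes
    }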
	I1025 10:21:13.931289  636484 node_ready.go:49] node "default-k8s-diff-port-767846" is "Ready"
	I1025 10:21:13.931333  636484 node_ready.go:38] duration metric: took 1.495294194s for node "default-k8s-diff-port-767846" to be "Ready" ...
	I1025 10:21:13.931355  636484 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:21:13.931415  636484 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:21:10.403779  638584 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 10:21:10.404001  638584 start.go:159] libmachine.API.Create for "embed-certs-683681" (driver="docker")
	I1025 10:21:10.404030  638584 client.go:168] LocalClient.Create starting
	I1025 10:21:10.404114  638584 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem
	I1025 10:21:10.404167  638584 main.go:141] libmachine: Decoding PEM data...
	I1025 10:21:10.404189  638584 main.go:141] libmachine: Parsing certificate...
	I1025 10:21:10.404267  638584 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem
	I1025 10:21:10.404309  638584 main.go:141] libmachine: Decoding PEM data...
	I1025 10:21:10.404335  638584 main.go:141] libmachine: Parsing certificate...
	I1025 10:21:10.404773  638584 cli_runner.go:164] Run: docker network inspect embed-certs-683681 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 10:21:10.426055  638584 cli_runner.go:211] docker network inspect embed-certs-683681 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 10:21:10.426150  638584 network_create.go:284] running [docker network inspect embed-certs-683681] to gather additional debugging logs...
	I1025 10:21:10.426175  638584 cli_runner.go:164] Run: docker network inspect embed-certs-683681
	W1025 10:21:10.450027  638584 cli_runner.go:211] docker network inspect embed-certs-683681 returned with exit code 1
	I1025 10:21:10.450066  638584 network_create.go:287] error running [docker network inspect embed-certs-683681]: docker network inspect embed-certs-683681: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-683681 not found
	I1025 10:21:10.450079  638584 network_create.go:289] output of [docker network inspect embed-certs-683681]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-683681 not found
	
	** /stderr **
	I1025 10:21:10.450215  638584 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:21:10.472971  638584 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b7c770f4d6bb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:31:17:4a:ca:3a} reservation:<nil>}
	I1025 10:21:10.473601  638584 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5189eca196b1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:42:d7:a0:fe:65} reservation:<nil>}
	I1025 10:21:10.474232  638584 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a58b5f36975c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1e:4d:ae:71:f0:49} reservation:<nil>}
	I1025 10:21:10.474754  638584 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c8aca1f62a35 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ce:65:a5:98:3f:04} reservation:<nil>}
	I1025 10:21:10.475283  638584 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-cc93092e09ae IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:c2:73:0a:fa:f6:13} reservation:<nil>}
	I1025 10:21:10.475999  638584 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a03c50}
	I1025 10:21:10.476026  638584 network_create.go:124] attempt to create docker network embed-certs-683681 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1025 10:21:10.476083  638584 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-683681 embed-certs-683681
	I1025 10:21:10.551427  638584 network_create.go:108] docker network embed-certs-683681 192.168.94.0/24 created
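	The five "skipping subnet ... that is taken" lines above walk the candidate /24s in increments of 9 on the third octet (49, 58, 67, 76, 85) until 192.168.94.0/24 comes back free; a subnet counts as taken when a bridge interface already owns it. A sketch of that probe, with the step size read off the log rather than from minikube's source:

    package network

    import (
        "errors"
        "fmt"
    )

    // freeSubnet walks the candidate private /24s in the order the log
    // shows (192.168.49.0, then +9 on the third octet) and returns the
    // first one without an existing bridge; "taken" would be populated
    // from docker network inspect results.
    func freeSubnet(taken map[string]bool) (string, error) {
        for third := 49; third <= 247; third += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", third)
            if !taken[subnet] {
                return subnet, nil
            }
        }
        return "", errors.New("no free 192.168.x.0/24 subnet left")
    }

	The winner is then handed to `docker network create --driver=bridge` with the .1 gateway and the MTU copied from the default bridge, as the network_create lines above show.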
	I1025 10:21:10.551459  638584 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-683681" container
	I1025 10:21:10.551518  638584 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 10:21:10.575731  638584 cli_runner.go:164] Run: docker volume create embed-certs-683681 --label name.minikube.sigs.k8s.io=embed-certs-683681 --label created_by.minikube.sigs.k8s.io=true
	I1025 10:21:10.596450  638584 oci.go:103] Successfully created a docker volume embed-certs-683681
	I1025 10:21:10.596543  638584 cli_runner.go:164] Run: docker run --rm --name embed-certs-683681-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-683681 --entrypoint /usr/bin/test -v embed-certs-683681:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 10:21:11.043993  638584 oci.go:107] Successfully prepared a docker volume embed-certs-683681
	I1025 10:21:11.044039  638584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:11.044062  638584 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 10:21:11.044129  638584 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-683681:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
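	The "preload sidecar" above is worth calling out: rather than loading images through the container runtime, minikube bind-mounts the lz4 tarball read-only into a throwaway kicbase container alongside the node's named volume and lets tar unpack straight into the volume, so /var arrives pre-populated before the real node container ever starts. The same docker invocation from Go; `extractPreload` is an illustrative name:

    package kic

    import "os/exec"

    // extractPreload reproduces the sidecar pattern in the log:
    // bind-mount the preload tarball read-only next to the node's named
    // volume in a throwaway container, and let tar unpack into the volume.
    func extractPreload(tarball, volume, kicbaseImage string) error {
        return exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            kicbaseImage,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
        ).Run()
    }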
	W1025 10:21:13.772552  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	I1025 10:21:14.336599  624632 pod_ready.go:94] pod "coredns-5dd5756b68-k5644" is "Ready"
	I1025 10:21:14.336630  624632 pod_ready.go:86] duration metric: took 39.577109588s for pod "coredns-5dd5756b68-k5644" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.340650  624632 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.346235  624632 pod_ready.go:94] pod "etcd-old-k8s-version-714798" is "Ready"
	I1025 10:21:14.346269  624632 pod_ready.go:86] duration metric: took 5.588309ms for pod "etcd-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.349654  624632 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.355198  624632 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-714798" is "Ready"
	I1025 10:21:14.355230  624632 pod_ready.go:86] duration metric: took 5.550064ms for pod "kube-apiserver-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.359203  624632 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.515864  624632 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-714798" is "Ready"
	I1025 10:21:14.515908  624632 pod_ready.go:86] duration metric: took 156.674255ms for pod "kube-controller-manager-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.679941  624632 pod_ready.go:83] waiting for pod "kube-proxy-kqg7q" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.064359  624632 pod_ready.go:94] pod "kube-proxy-kqg7q" is "Ready"
	I1025 10:21:15.064395  624632 pod_ready.go:86] duration metric: took 384.425103ms for pod "kube-proxy-kqg7q" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.264420  624632 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.664469  624632 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-714798" is "Ready"
	I1025 10:21:15.664501  624632 pod_ready.go:86] duration metric: took 400.048856ms for pod "kube-scheduler-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.664517  624632 pod_ready.go:40] duration metric: took 40.910543454s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
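	The pod_ready.go loop that just completed for old-k8s-version-714798 is the same machinery producing the interleaved `W ... is not "Ready"` warnings from the other profiles: poll each control-plane pod until its Ready condition turns true, log a warning per failed probe, and record the duration. A hedged client-go sketch of that shape (`waitPodReady` and the intervals are illustrative, not minikube's actual values):

    package ready

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls until the pod reports the Ready condition,
    // the shape of the pod_ready.go loop whose warnings appear above.
    func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, 4*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient errors: keep polling
                }
                for _, cond := range pod.Status.Conditions {
                    if cond.Type == corev1.PodReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }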
	I1025 10:21:15.713277  624632 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1025 10:21:15.739862  624632 out.go:203] 
	W1025 10:21:15.783078  624632 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1025 10:21:15.791059  624632 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1025 10:21:15.796132  624632 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-714798" cluster and "default" namespace by default
	I1025 10:21:15.245915  636484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.794706474s)
	I1025 10:21:15.246013  636484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.780553475s)
	I1025 10:21:16.201960  636484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.577043142s)
	I1025 10:21:16.202175  636484 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.270743207s)
	I1025 10:21:16.202205  636484 api_server.go:72] duration metric: took 3.975127965s to wait for apiserver process to appear ...
	I1025 10:21:16.202212  636484 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:21:16.202233  636484 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1025 10:21:16.203931  636484 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-767846 addons enable metrics-server
	
	I1025 10:21:16.206179  636484 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1025 10:21:14.831620  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:16.832274  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	I1025 10:21:16.207469  636484 addons.go:514] duration metric: took 3.980316596s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1025 10:21:16.208161  636484 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:21:16.208186  636484 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:21:16.702507  636484 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1025 10:21:16.707281  636484 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1025 10:21:16.708497  636484 api_server.go:141] control plane version: v1.34.1
	I1025 10:21:16.708529  636484 api_server.go:131] duration metric: took 506.309184ms to wait for apiserver health ...
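	Both 500 responses above fail on a single post-start hook, `[-]poststarthook/rbac/bootstrap-roles failed: reason withheld`; everything else already reports ok, so minikube simply re-polls until /healthz flips to 200, which here takes one extra round trip about half a second later. A minimal sketch of such a probe loop; the client setup, including skipping TLS verification for the probe, is an assumption of this sketch:

    package apiserver

    import (
        "context"
        "crypto/tls"
        "net/http"
        "time"
    )

    // pollHealthz re-probes /healthz until it returns 200, tolerating
    // the transient 500 seen while post-start hooks finish.
    func pollHealthz(ctx context.Context, url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // cert trust is established elsewhere; the probe skips verification
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond):
            }
        }
    }

	Called as pollHealthz(ctx, "https://192.168.103.2:8444/healthz"), this matches the endpoint probed in the log.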
	I1025 10:21:16.708542  636484 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:21:16.712747  636484 system_pods.go:59] 8 kube-system pods found
	I1025 10:21:16.712806  636484 system_pods.go:61] "coredns-66bc5c9577-rznxv" [d7eae20c-8d39-4486-ab11-13675911180f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:16.712819  636484 system_pods.go:61] "etcd-default-k8s-diff-port-767846" [7612c238-ca0d-458d-a901-e96167590fc2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:21:16.712835  636484 system_pods.go:61] "kindnet-vcqs2" [e41fd0fd-97c4-44ef-a645-cf0136340098] Running
	I1025 10:21:16.712845  636484 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-767846" [6eaa12a1-d4b6-4e96-81ed-18662e83034c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:21:16.712859  636484 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-767846" [7ffb55c8-db2f-4807-ace7-044a0c281f62] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:21:16.712874  636484 system_pods.go:61] "kube-proxy-cvm5c" [42278e98-5278-4efa-b484-ec73c16fc851] Running
	I1025 10:21:16.712885  636484 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-767846" [934d0b38-85ad-4c54-9e94-970177aa8cf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:21:16.712924  636484 system_pods.go:61] "storage-provisioner" [06a917da-eaa2-4b50-8c56-31a0ca7d14e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:16.712936  636484 system_pods.go:74] duration metric: took 4.383599ms to wait for pod list to return data ...
	I1025 10:21:16.712948  636484 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:21:16.715673  636484 default_sa.go:45] found service account: "default"
	I1025 10:21:16.715694  636484 default_sa.go:55] duration metric: took 2.737037ms for default service account to be created ...
	I1025 10:21:16.715704  636484 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:21:16.718943  636484 system_pods.go:86] 8 kube-system pods found
	I1025 10:21:16.718978  636484 system_pods.go:89] "coredns-66bc5c9577-rznxv" [d7eae20c-8d39-4486-ab11-13675911180f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:16.718990  636484 system_pods.go:89] "etcd-default-k8s-diff-port-767846" [7612c238-ca0d-458d-a901-e96167590fc2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:21:16.718997  636484 system_pods.go:89] "kindnet-vcqs2" [e41fd0fd-97c4-44ef-a645-cf0136340098] Running
	I1025 10:21:16.719005  636484 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-767846" [6eaa12a1-d4b6-4e96-81ed-18662e83034c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:21:16.719014  636484 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-767846" [7ffb55c8-db2f-4807-ace7-044a0c281f62] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:21:16.719034  636484 system_pods.go:89] "kube-proxy-cvm5c" [42278e98-5278-4efa-b484-ec73c16fc851] Running
	I1025 10:21:16.719042  636484 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-767846" [934d0b38-85ad-4c54-9e94-970177aa8cf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:21:16.719049  636484 system_pods.go:89] "storage-provisioner" [06a917da-eaa2-4b50-8c56-31a0ca7d14e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:16.719059  636484 system_pods.go:126] duration metric: took 3.347724ms to wait for k8s-apps to be running ...
	I1025 10:21:16.719070  636484 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:21:16.719120  636484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:21:16.733907  636484 system_svc.go:56] duration metric: took 14.825705ms WaitForService to wait for kubelet
	I1025 10:21:16.733943  636484 kubeadm.go:586] duration metric: took 4.506864504s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:21:16.733968  636484 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:21:16.737241  636484 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 10:21:16.737269  636484 node_conditions.go:123] node cpu capacity is 8
	I1025 10:21:16.737284  636484 node_conditions.go:105] duration metric: took 3.310515ms to run NodePressure ...
	I1025 10:21:16.737296  636484 start.go:241] waiting for startup goroutines ...
	I1025 10:21:16.737306  636484 start.go:246] waiting for cluster config update ...
	I1025 10:21:16.737329  636484 start.go:255] writing updated cluster config ...
	I1025 10:21:16.737611  636484 ssh_runner.go:195] Run: rm -f paused
	I1025 10:21:16.742069  636484 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:16.748801  636484 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rznxv" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:21:18.754620  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:16.111649  638584 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-683681:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.067461823s)
	I1025 10:21:16.111690  638584 kic.go:203] duration metric: took 5.067622848s to extract preloaded images to volume ...
	W1025 10:21:16.111819  638584 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 10:21:16.111866  638584 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 10:21:16.111917  638584 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 10:21:16.213690  638584 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-683681 --name embed-certs-683681 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-683681 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-683681 --network embed-certs-683681 --ip 192.168.94.2 --volume embed-certs-683681:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 10:21:16.572477  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Running}}
	I1025 10:21:16.594243  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:16.615558  638584 cli_runner.go:164] Run: docker exec embed-certs-683681 stat /var/lib/dpkg/alternatives/iptables
	I1025 10:21:16.666536  638584 oci.go:144] the created container "embed-certs-683681" has a running status.
	I1025 10:21:16.666576  638584 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa...
	I1025 10:21:16.809984  638584 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 10:21:16.847757  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:16.871585  638584 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 10:21:16.871610  638584 kic_runner.go:114] Args: [docker exec --privileged embed-certs-683681 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 10:21:16.923128  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:16.943365  638584 machine.go:93] provisionDockerMachine start ...
	I1025 10:21:16.943479  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:16.966341  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:16.966647  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:16.966668  638584 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:21:16.967537  638584 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56448->127.0.0.1:33128: read: connection reset by peer
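	The connection reset above is the first dial racing sshd inside the just-started container; libmachine retries and gets the `hostname` output a few seconds later. The port it dials (33128) is the ephemeral 127.0.0.1 port Docker assigned to the container's published 22/tcp, recovered with the inspect template run at 10:21:16.943. The same lookup from Go, with an illustrative function name:

    package kic

    import (
        "os/exec"
        "strings"
    )

    // hostSSHPort recovers the ephemeral host port Docker assigned to
    // the container's published 22/tcp, using the same inspect template
    // as the log.
    func hostSSHPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil // e.g. "33128"
    }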
	I1025 10:21:20.116967  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-683681
	
	I1025 10:21:20.117014  638584 ubuntu.go:182] provisioning hostname "embed-certs-683681"
	I1025 10:21:20.117084  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:20.137778  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:20.138008  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:20.138021  638584 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-683681 && echo "embed-certs-683681" | sudo tee /etc/hostname
	W1025 10:21:19.333601  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:21.831601  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:20.755645  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:22.755896  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:20.296939  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-683681
	
	I1025 10:21:20.297025  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:20.319104  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:20.319456  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:20.319479  638584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-683681' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-683681/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-683681' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:21:20.480669  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:21:20.480704  638584 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-321838/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-321838/.minikube}
	I1025 10:21:20.480727  638584 ubuntu.go:190] setting up certificates
	I1025 10:21:20.480741  638584 provision.go:84] configureAuth start
	I1025 10:21:20.480822  638584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-683681
	I1025 10:21:20.505092  638584 provision.go:143] copyHostCerts
	I1025 10:21:20.505168  638584 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem, removing ...
	I1025 10:21:20.505184  638584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem
	I1025 10:21:20.505274  638584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem (1679 bytes)
	I1025 10:21:20.505416  638584 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem, removing ...
	I1025 10:21:20.505430  638584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem
	I1025 10:21:20.505476  638584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem (1078 bytes)
	I1025 10:21:20.505561  638584 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem, removing ...
	I1025 10:21:20.505572  638584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem
	I1025 10:21:20.505630  638584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem (1123 bytes)
	I1025 10:21:20.505706  638584 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem org=jenkins.embed-certs-683681 san=[127.0.0.1 192.168.94.2 embed-certs-683681 localhost minikube]
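	provision.go:117 above issues a server certificate signed by the profile CA, carrying the SAN list shown (loopback, the node IP 192.168.94.2, and the hostnames). A minimal crypto/x509 sketch of issuing such a cert; the function name, key size, validity, and serial scheme are illustrative here, not minikube's actual choices:

    package certs

    import (
        "crypto"
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "time"
    )

    // newServerCert issues a CA-signed server certificate carrying the
    // SAN list from the log (IP addresses plus hostnames).
    func newServerCert(caCert *x509.Certificate, caKey crypto.Signer,
        dnsNames []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {

        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-683681"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     dnsNames, // embed-certs-683681, localhost, minikube
            IPAddresses:  ips,      // 127.0.0.1, 192.168.94.2
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, key.Public(), caKey)
        if err != nil {
            return nil, nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
    }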
	I1025 10:21:20.998585  638584 provision.go:177] copyRemoteCerts
	I1025 10:21:20.998661  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:21:20.998717  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.022129  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.137465  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 10:21:21.166388  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:21:21.193168  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:21:21.218286  638584 provision.go:87] duration metric: took 737.524136ms to configureAuth
	I1025 10:21:21.218330  638584 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:21:21.218553  638584 config.go:182] Loaded profile config "embed-certs-683681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:21.218676  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.245915  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:21.246236  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:21.246262  638584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:21:21.569413  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:21:21.569443  638584 machine.go:96] duration metric: took 4.626049853s to provisionDockerMachine
	I1025 10:21:21.569456  638584 client.go:171] duration metric: took 11.165417694s to LocalClient.Create
	I1025 10:21:21.569475  638584 start.go:167] duration metric: took 11.165474816s to libmachine.API.Create "embed-certs-683681"
	I1025 10:21:21.569486  638584 start.go:293] postStartSetup for "embed-certs-683681" (driver="docker")
	I1025 10:21:21.569498  638584 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:21:21.569575  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:21:21.569622  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.594722  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.713328  638584 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:21:21.718538  638584 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:21:21.718572  638584 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:21:21.718589  638584 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/addons for local assets ...
	I1025 10:21:21.718659  638584 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/files for local assets ...
	I1025 10:21:21.718787  638584 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem -> 3254552.pem in /etc/ssl/certs
	I1025 10:21:21.718927  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:21:21.729097  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:21.759300  638584 start.go:296] duration metric: took 189.796063ms for postStartSetup
	I1025 10:21:21.759764  638584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-683681
	I1025 10:21:21.783751  638584 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/config.json ...
	I1025 10:21:21.784070  638584 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:21:21.784113  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.807921  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.920186  638584 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:21:21.927662  638584 start.go:128] duration metric: took 11.525830646s to createHost
	I1025 10:21:21.927699  638584 start.go:83] releasing machines lock for "embed-certs-683681", held for 11.526002458s
	I1025 10:21:21.927785  638584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-683681
	I1025 10:21:21.954049  638584 ssh_runner.go:195] Run: cat /version.json
	I1025 10:21:21.954096  638584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:21:21.954115  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.954188  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.978409  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.979872  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:22.092988  638584 ssh_runner.go:195] Run: systemctl --version
	I1025 10:21:22.175966  638584 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:21:22.229838  638584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:21:22.236975  638584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:21:22.237063  638584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:21:22.280942  638584 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 10:21:22.280974  638584 start.go:495] detecting cgroup driver to use...
	I1025 10:21:22.281010  638584 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 10:21:22.281075  638584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:21:22.306839  638584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:21:22.324489  638584 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:21:22.324560  638584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:21:22.350902  638584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:21:22.380086  638584 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:21:22.506896  638584 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:21:22.639498  638584 docker.go:234] disabling docker service ...
	I1025 10:21:22.639578  638584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:21:22.669198  638584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:21:22.689583  638584 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:21:22.814437  638584 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:21:22.917355  638584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
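
The stop/disable/mask sequence above is the usual way to ensure neither docker nor cri-docker can come back: masking points the unit file at /dev/null, so not even a dependency can start it. One way to confirm, assuming the same container name:

    docker exec embed-certs-683681 systemctl is-enabled docker.service cri-docker.service
    # expect "masked" for each unit (is-enabled exits non-zero for masked units)
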
	I1025 10:21:22.933471  638584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:21:22.951220  638584 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:21:22.951289  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.964021  638584 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 10:21:22.964092  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.974888  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.985640  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.996280  638584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:21:23.008692  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:23.019742  638584 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:23.036857  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
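
The sed edits above all target the same drop-in. Reconstructed from the sed patterns (the section headers are assumed from CRI-O's stock config layout, not dumped from this run), /etc/crio/crio.conf.d/02-crio.conf should end up containing roughly:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
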
	I1025 10:21:23.048489  638584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:21:23.060801  638584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:21:23.072496  638584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:23.170641  638584 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:21:24.036513  638584 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:21:24.036615  638584 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:21:24.042080  638584 start.go:563] Will wait 60s for crictl version
	I1025 10:21:24.042156  638584 ssh_runner.go:195] Run: which crictl
	I1025 10:21:24.047422  638584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:21:24.082362  638584 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:21:24.082466  638584 ssh_runner.go:195] Run: crio --version
	I1025 10:21:24.126861  638584 ssh_runner.go:195] Run: crio --version
	I1025 10:21:24.175837  638584 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:21:24.178134  638584 cli_runner.go:164] Run: docker network inspect embed-certs-683681 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:21:24.201413  638584 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1025 10:21:24.207278  638584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
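
The /etc/hosts rewrite above is idempotent: grep -v drops any stale host.minikube.internal record, echo appends the fresh one, and the sudo cp installs the temp file over /etc/hosts in one step. From inside the node it can be verified with:

    getent hosts host.minikube.internal    # expect 192.168.94.1
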
	I1025 10:21:24.223512  638584 kubeadm.go:883] updating cluster {Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:21:24.223683  638584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:24.223762  638584 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:24.272966  638584 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:24.272993  638584 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:21:24.273051  638584 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:24.308934  638584 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:24.308965  638584 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:21:24.308975  638584 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1025 10:21:24.309097  638584 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-683681 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
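
The kubelet unit fragment above relies on the standard systemd override idiom: the bare "ExecStart=" clears the ExecStart inherited from /lib/systemd/system/kubelet.service, and the following line supplies minikube's full kubelet command. The merged view can be inspected on the node with:

    systemctl cat kubelet    # shows the base unit plus the 10-kubeadm.conf drop-in
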
	I1025 10:21:24.309184  638584 ssh_runner.go:195] Run: crio config
	I1025 10:21:24.382243  638584 cni.go:84] Creating CNI manager for ""
	I1025 10:21:24.382273  638584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:21:24.382297  638584 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:21:24.382337  638584 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-683681 NodeName:embed-certs-683681 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:21:24.382524  638584 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-683681"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
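
The rendered kubeadm config above is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later moved into place as kubeadm.yaml. Assuming kubeadm v1.34 on the PATH (the validate subcommand has existed since v1.26), it can be checked offline before init:

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
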
	I1025 10:21:24.382607  638584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:21:24.394268  638584 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:21:24.394387  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:21:24.406618  638584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1025 10:21:24.425969  638584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:21:24.449251  638584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1025 10:21:24.469582  638584 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:21:24.474973  638584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:21:24.490157  638584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:24.584608  638584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:21:24.614181  638584 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681 for IP: 192.168.94.2
	I1025 10:21:24.614210  638584 certs.go:195] generating shared ca certs ...
	I1025 10:21:24.614233  638584 certs.go:227] acquiring lock for ca certs: {Name:mke559c04eea9c265cb2a7beab0da125bee52db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.614424  638584 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key
	I1025 10:21:24.614484  638584 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key
	I1025 10:21:24.614496  638584 certs.go:257] generating profile certs ...
	I1025 10:21:24.614561  638584 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.key
	I1025 10:21:24.614588  638584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.crt with IP's: []
	I1025 10:21:24.860136  638584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.crt ...
	I1025 10:21:24.860185  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.crt: {Name:mk13866e786fa05bf2537b78a891e332bde8c0bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.860411  638584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.key ...
	I1025 10:21:24.860433  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.key: {Name:mk1337a45bd58216e46a47cf6f99440d10fa8b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.860559  638584 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81
	I1025 10:21:24.860582  638584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1025 10:21:24.949254  638584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81 ...
	I1025 10:21:24.949286  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81: {Name:mkc51a7d58b8866a38120d27081d78fd5d68e786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.949518  638584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81 ...
	I1025 10:21:24.949547  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81: {Name:mk94d386c4ce3ce7255b450634f934fa53890845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.949697  638584 certs.go:382] copying /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81 -> /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt
	I1025 10:21:24.949820  638584 certs.go:386] copying /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81 -> /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key
	I1025 10:21:24.949908  638584 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key
	I1025 10:21:24.949937  638584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt with IP's: []
	W1025 10:21:24.331982  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:26.831359  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:25.254917  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:27.754831  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:25.383221  638584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt ...
	I1025 10:21:25.383272  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt: {Name:mk46cb1967cb21d5d9aafce0c0335add4612cf00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:25.383535  638584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key ...
	I1025 10:21:25.383560  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key: {Name:mkda2e4f8c6847061b7c83d0748f50b193d241a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:25.383814  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem (1338 bytes)
	W1025 10:21:25.383870  638584 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455_empty.pem, impossibly tiny 0 bytes
	I1025 10:21:25.383887  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:21:25.383917  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:21:25.383941  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:21:25.383962  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem (1679 bytes)
	I1025 10:21:25.384004  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:25.384676  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:21:25.406810  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 10:21:25.429770  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:21:25.451189  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:21:25.475734  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1025 10:21:25.500538  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 10:21:25.522356  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:21:25.545290  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:21:25.567130  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:21:25.591445  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem --> /usr/share/ca-certificates/325455.pem (1338 bytes)
	I1025 10:21:25.616100  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /usr/share/ca-certificates/3254552.pem (1708 bytes)
	I1025 10:21:25.635723  638584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:21:25.650419  638584 ssh_runner.go:195] Run: openssl version
	I1025 10:21:25.657438  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:21:25.667296  638584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:25.671566  638584 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:32 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:25.671639  638584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:25.708223  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:21:25.718734  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/325455.pem && ln -fs /usr/share/ca-certificates/325455.pem /etc/ssl/certs/325455.pem"
	I1025 10:21:25.728930  638584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/325455.pem
	I1025 10:21:25.733604  638584 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:38 /usr/share/ca-certificates/325455.pem
	I1025 10:21:25.733672  638584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/325455.pem
	I1025 10:21:25.770496  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/325455.pem /etc/ssl/certs/51391683.0"
	I1025 10:21:25.780237  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3254552.pem && ln -fs /usr/share/ca-certificates/3254552.pem /etc/ssl/certs/3254552.pem"
	I1025 10:21:25.790312  638584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3254552.pem
	I1025 10:21:25.794835  638584 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:38 /usr/share/ca-certificates/3254552.pem
	I1025 10:21:25.794898  638584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3254552.pem
	I1025 10:21:25.832583  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3254552.pem /etc/ssl/certs/3ec20f2e.0"
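
Each "ln -fs" above follows OpenSSL's hashed-directory convention: TLS libraries look certificates up in /etc/ssl/certs via a symlink named <subject-hash>.0. The link names used here (b5213941.0, 51391683.0, 3ec20f2e.0) are exactly what the preceding "openssl x509 -hash" calls print, e.g.:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941, matching the symlink created above; the hash is a function of the CA's subject
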
	I1025 10:21:25.842614  638584 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:21:25.846872  638584 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 10:21:25.846930  638584 kubeadm.go:400] StartCluster: {Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:21:25.847005  638584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:21:25.847068  638584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:21:25.875826  638584 cri.go:89] found id: ""
	I1025 10:21:25.875903  638584 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:21:25.885163  638584 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:21:25.894136  638584 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 10:21:25.894192  638584 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:21:25.903706  638584 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 10:21:25.903732  638584 kubeadm.go:157] found existing configuration files:
	
	I1025 10:21:25.903784  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 10:21:25.913301  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 10:21:25.913384  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:21:25.923343  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 10:21:25.932490  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 10:21:25.932550  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:21:25.941477  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 10:21:25.950962  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 10:21:25.951028  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:21:25.959533  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 10:21:25.968524  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 10:21:25.968595  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 10:21:25.977380  638584 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 10:21:26.045566  638584 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 10:21:26.120440  638584 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1025 10:21:29.331743  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:31.831906  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:30.254936  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:32.256411  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:36.665150  638584 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 10:21:36.665238  638584 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 10:21:36.665366  638584 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 10:21:36.665424  638584 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 10:21:36.665455  638584 kubeadm.go:318] OS: Linux
	I1025 10:21:36.665528  638584 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 10:21:36.665640  638584 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 10:21:36.665711  638584 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 10:21:36.665755  638584 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 10:21:36.665836  638584 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 10:21:36.665906  638584 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 10:21:36.665989  638584 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 10:21:36.666061  638584 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 10:21:36.666164  638584 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 10:21:36.666287  638584 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 10:21:36.666443  638584 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 10:21:36.666505  638584 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 10:21:36.668101  638584 out.go:252]   - Generating certificates and keys ...
	I1025 10:21:36.668178  638584 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 10:21:36.668239  638584 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 10:21:36.668297  638584 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 10:21:36.668408  638584 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 10:21:36.668487  638584 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 10:21:36.668570  638584 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 10:21:36.668632  638584 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 10:21:36.669282  638584 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-683681 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1025 10:21:36.669368  638584 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 10:21:36.669522  638584 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-683681 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1025 10:21:36.669602  638584 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 10:21:36.669681  638584 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 10:21:36.669732  638584 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 10:21:36.669795  638584 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 10:21:36.669856  638584 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 10:21:36.669922  638584 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 10:21:36.669975  638584 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 10:21:36.670054  638584 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 10:21:36.670110  638584 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 10:21:36.670198  638584 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 10:21:36.670268  638584 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 10:21:36.673336  638584 out.go:252]   - Booting up control plane ...
	I1025 10:21:36.673471  638584 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 10:21:36.673585  638584 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 10:21:36.673666  638584 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 10:21:36.673811  638584 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 10:21:36.673918  638584 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 10:21:36.674052  638584 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 10:21:36.674150  638584 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 10:21:36.674197  638584 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 10:21:36.674448  638584 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 10:21:36.674610  638584 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 10:21:36.674735  638584 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.921842ms
	I1025 10:21:36.674869  638584 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 10:21:36.674985  638584 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1025 10:21:36.675113  638584 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 10:21:36.675225  638584 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 10:21:36.675373  638584 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.848539291s
	I1025 10:21:36.675485  638584 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.099917517s
	I1025 10:21:36.675576  638584 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.501482903s
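
The health probes replayed above are plain HTTP(S) endpoints and can be reproduced by hand on the node once the static pods are up (the controller-manager and scheduler serve self-signed certificates, hence -k; the URLs are taken from the log lines above):

    curl -sf  http://127.0.0.1:10248/healthz    # kubelet
    curl -skf https://127.0.0.1:10257/healthz   # kube-controller-manager
    curl -skf https://127.0.0.1:10259/livez     # kube-scheduler
    curl -skf https://192.168.94.2:8443/livez   # kube-apiserver (anonymous /livez access is allowed by default)
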
	I1025 10:21:36.675749  638584 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 10:21:36.675902  638584 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 10:21:36.675992  638584 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 10:21:36.676186  638584 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-683681 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 10:21:36.676270  638584 kubeadm.go:318] [bootstrap-token] Using token: gh3e3n.vi8ppuvnf3ix9l58
	I1025 10:21:36.678455  638584 out.go:252]   - Configuring RBAC rules ...
	I1025 10:21:36.678655  638584 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 10:21:36.678741  638584 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 10:21:36.678915  638584 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 10:21:36.679094  638584 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 10:21:36.679206  638584 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 10:21:36.679286  638584 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 10:21:36.679483  638584 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 10:21:36.679551  638584 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 10:21:36.679620  638584 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 10:21:36.679632  638584 kubeadm.go:318] 
	I1025 10:21:36.679721  638584 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 10:21:36.679732  638584 kubeadm.go:318] 
	I1025 10:21:36.679835  638584 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 10:21:36.679845  638584 kubeadm.go:318] 
	I1025 10:21:36.679882  638584 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 10:21:36.679977  638584 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 10:21:36.680061  638584 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 10:21:36.680070  638584 kubeadm.go:318] 
	I1025 10:21:36.680154  638584 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 10:21:36.680170  638584 kubeadm.go:318] 
	I1025 10:21:36.680221  638584 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 10:21:36.680229  638584 kubeadm.go:318] 
	I1025 10:21:36.680289  638584 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 10:21:36.680387  638584 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 10:21:36.680463  638584 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 10:21:36.680471  638584 kubeadm.go:318] 
	I1025 10:21:36.680563  638584 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 10:21:36.680661  638584 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 10:21:36.680670  638584 kubeadm.go:318] 
	I1025 10:21:36.680776  638584 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token gh3e3n.vi8ppuvnf3ix9l58 \
	I1025 10:21:36.680932  638584 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d220a0fdd8ffa4188e5a3a4b5b37c16072a735a52e5507fcdbcb4d38c461642f \
	I1025 10:21:36.680959  638584 kubeadm.go:318] 	--control-plane 
	I1025 10:21:36.680967  638584 kubeadm.go:318] 
	I1025 10:21:36.681062  638584 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 10:21:36.681073  638584 kubeadm.go:318] 
	I1025 10:21:36.681190  638584 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token gh3e3n.vi8ppuvnf3ix9l58 \
	I1025 10:21:36.681350  638584 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d220a0fdd8ffa4188e5a3a4b5b37c16072a735a52e5507fcdbcb4d38c461642f 
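
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's DER-encoded public key. It can be recomputed from the CA that was copied earlier to /var/lib/minikube/certs, assuming an RSA CA key (minikube's default):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # should print d220a0fdd8ffa4188e5a3a4b5b37c16072a735a52e5507fcdbcb4d38c461642f
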
	I1025 10:21:36.681383  638584 cni.go:84] Creating CNI manager for ""
	I1025 10:21:36.681402  638584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:21:36.685048  638584 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1025 10:21:34.332728  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:36.832195  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:34.756305  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:37.255124  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:36.686372  638584 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 10:21:36.691990  638584 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 10:21:36.692012  638584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 10:21:36.711248  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 10:21:36.950001  638584 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 10:21:36.950063  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:36.950140  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-683681 minikube.k8s.io/updated_at=2025_10_25T10_21_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689 minikube.k8s.io/name=embed-certs-683681 minikube.k8s.io/primary=true
	I1025 10:21:36.962716  638584 ops.go:34] apiserver oom_adj: -16
	I1025 10:21:37.040626  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:37.541457  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:38.041452  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:38.541265  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:39.041583  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:39.541553  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:40.041803  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:39.330926  631515 pod_ready.go:94] pod "coredns-66bc5c9577-gtnvx" is "Ready"
	I1025 10:21:39.330956  631515 pod_ready.go:86] duration metric: took 38.506063732s for pod "coredns-66bc5c9577-gtnvx" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.333923  631515 pod_ready.go:83] waiting for pod "etcd-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.338091  631515 pod_ready.go:94] pod "etcd-no-preload-899665" is "Ready"
	I1025 10:21:39.338119  631515 pod_ready.go:86] duration metric: took 4.169551ms for pod "etcd-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.340510  631515 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.344782  631515 pod_ready.go:94] pod "kube-apiserver-no-preload-899665" is "Ready"
	I1025 10:21:39.344808  631515 pod_ready.go:86] duration metric: took 4.267435ms for pod "kube-apiserver-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.346928  631515 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.527867  631515 pod_ready.go:94] pod "kube-controller-manager-no-preload-899665" is "Ready"
	I1025 10:21:39.527898  631515 pod_ready.go:86] duration metric: took 180.948376ms for pod "kube-controller-manager-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.728099  631515 pod_ready.go:83] waiting for pod "kube-proxy-fdthr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:40.129442  631515 pod_ready.go:94] pod "kube-proxy-fdthr" is "Ready"
	I1025 10:21:40.129471  631515 pod_ready.go:86] duration metric: took 401.343438ms for pod "kube-proxy-fdthr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:40.329196  631515 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:40.728428  631515 pod_ready.go:94] pod "kube-scheduler-no-preload-899665" is "Ready"
	I1025 10:21:40.728461  631515 pod_ready.go:86] duration metric: took 399.238728ms for pod "kube-scheduler-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:40.728477  631515 pod_ready.go:40] duration metric: took 39.908384057s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:40.776763  631515 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 10:21:40.778765  631515 out.go:179] * Done! kubectl is now configured to use "no-preload-899665" cluster and "default" namespace by default
	I1025 10:21:40.541552  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:41.041202  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:41.540928  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:41.626698  638584 kubeadm.go:1113] duration metric: took 4.676682024s to wait for elevateKubeSystemPrivileges
	I1025 10:21:41.626740  638584 kubeadm.go:402] duration metric: took 15.779813606s to StartCluster
	I1025 10:21:41.626763  638584 settings.go:142] acquiring lock: {Name:mkccf1ae03faaf532633e370e89b56d0fc3d2b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:41.626844  638584 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:21:41.628485  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:41.628738  638584 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:21:41.628758  638584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 10:21:41.628815  638584 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:21:41.628922  638584 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-683681"
	I1025 10:21:41.628947  638584 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-683681"
	I1025 10:21:41.628984  638584 host.go:66] Checking if "embed-certs-683681" exists ...
	I1025 10:21:41.628970  638584 addons.go:69] Setting default-storageclass=true in profile "embed-certs-683681"
	I1025 10:21:41.629014  638584 config.go:182] Loaded profile config "embed-certs-683681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:41.629033  638584 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-683681"
	I1025 10:21:41.629466  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:41.629530  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:41.632478  638584 out.go:179] * Verifying Kubernetes components...
	I1025 10:21:41.635235  638584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:41.654284  638584 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:21:41.655720  638584 addons.go:238] Setting addon default-storageclass=true in "embed-certs-683681"
	I1025 10:21:41.655762  638584 host.go:66] Checking if "embed-certs-683681" exists ...
	I1025 10:21:41.656106  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:41.656203  638584 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:21:41.656228  638584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:21:41.656290  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:41.679823  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:41.684242  638584 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:21:41.684268  638584 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:21:41.684345  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:41.712034  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:41.726056  638584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 10:21:41.804301  638584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:21:41.809475  638584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:21:41.831472  638584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:21:41.912561  638584 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1025 10:21:42.139096  638584 node_ready.go:35] waiting up to 6m0s for node "embed-certs-683681" to be "Ready" ...
	I1025 10:21:42.145509  638584 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1025 10:21:39.755018  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:41.756413  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:42.146900  638584 addons.go:514] duration metric: took 518.085843ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 10:21:42.416647  638584 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-683681" context rescaled to 1 replicas
	W1025 10:21:44.142621  638584 node_ready.go:57] node "embed-certs-683681" has "Ready":"False" status (will retry)
	W1025 10:21:44.256001  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:46.755543  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:47.755253  636484 pod_ready.go:94] pod "coredns-66bc5c9577-rznxv" is "Ready"
	I1025 10:21:47.755285  636484 pod_ready.go:86] duration metric: took 31.006445495s for pod "coredns-66bc5c9577-rznxv" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.758305  636484 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.763202  636484 pod_ready.go:94] pod "etcd-default-k8s-diff-port-767846" is "Ready"
	I1025 10:21:47.763230  636484 pod_ready.go:86] duration metric: took 4.871359ms for pod "etcd-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.765533  636484 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.769981  636484 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-767846" is "Ready"
	I1025 10:21:47.770085  636484 pod_ready.go:86] duration metric: took 4.518205ms for pod "kube-apiserver-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.772484  636484 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.952605  636484 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-767846" is "Ready"
	I1025 10:21:47.952636  636484 pod_ready.go:86] duration metric: took 180.129601ms for pod "kube-controller-manager-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:48.153608  636484 pod_ready.go:83] waiting for pod "kube-proxy-cvm5c" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:48.552560  636484 pod_ready.go:94] pod "kube-proxy-cvm5c" is "Ready"
	I1025 10:21:48.552591  636484 pod_ready.go:86] duration metric: took 398.954024ms for pod "kube-proxy-cvm5c" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:48.753044  636484 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:49.152785  636484 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-767846" is "Ready"
	I1025 10:21:49.152816  636484 pod_ready.go:86] duration metric: took 399.744601ms for pod "kube-scheduler-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:49.152828  636484 pod_ready.go:40] duration metric: took 32.410721068s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:49.201278  636484 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 10:21:49.203247  636484 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-767846" cluster and "default" namespace by default
	W1025 10:21:46.143197  638584 node_ready.go:57] node "embed-certs-683681" has "Ready":"False" status (will retry)
	W1025 10:21:48.642439  638584 node_ready.go:57] node "embed-certs-683681" has "Ready":"False" status (will retry)
	W1025 10:21:50.642613  638584 node_ready.go:57] node "embed-certs-683681" has "Ready":"False" status (will retry)
	I1025 10:21:52.643144  638584 node_ready.go:49] node "embed-certs-683681" is "Ready"
	I1025 10:21:52.643184  638584 node_ready.go:38] duration metric: took 10.504034315s for node "embed-certs-683681" to be "Ready" ...
	I1025 10:21:52.643202  638584 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:21:52.643262  638584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:21:52.659492  638584 api_server.go:72] duration metric: took 11.030720868s to wait for apiserver process to appear ...
	I1025 10:21:52.659528  638584 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:21:52.659553  638584 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 10:21:52.666017  638584 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1025 10:21:52.667256  638584 api_server.go:141] control plane version: v1.34.1
	I1025 10:21:52.667289  638584 api_server.go:131] duration metric: took 7.752823ms to wait for apiserver health ...
	I1025 10:21:52.667300  638584 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:21:52.670860  638584 system_pods.go:59] 8 kube-system pods found
	I1025 10:21:52.670907  638584 system_pods.go:61] "coredns-66bc5c9577-545dp" [a2709fe3-a1d1-4394-8cf7-3776dc8fd318] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:52.670917  638584 system_pods.go:61] "etcd-embed-certs-683681" [efd93203-1fbf-495a-8d60-73421d0a6d9c] Running
	I1025 10:21:52.670928  638584 system_pods.go:61] "kindnet-5zktx" [3398616a-6eb4-432e-bb84-ae1f166c7e71] Running
	I1025 10:21:52.670934  638584 system_pods.go:61] "kube-apiserver-embed-certs-683681" [0ff30802-f9d1-465d-8d94-769528b99497] Running
	I1025 10:21:52.670944  638584 system_pods.go:61] "kube-controller-manager-embed-certs-683681" [b794e426-ac8e-48d6-b9e3-38998ed4f272] Running
	I1025 10:21:52.670949  638584 system_pods.go:61] "kube-proxy-dbks6" [551b9ca3-e53d-4be0-bcb5-b96d76be6c14] Running
	I1025 10:21:52.670958  638584 system_pods.go:61] "kube-scheduler-embed-certs-683681" [97ed6526-e7b5-4086-a584-7e0cb301fb30] Running
	I1025 10:21:52.670966  638584 system_pods.go:61] "storage-provisioner" [42d81686-dd78-4ed1-9ead-cbcdca1d14ce] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:52.670977  638584 system_pods.go:74] duration metric: took 3.669298ms to wait for pod list to return data ...
	I1025 10:21:52.670994  638584 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:21:52.673975  638584 default_sa.go:45] found service account: "default"
	I1025 10:21:52.674010  638584 default_sa.go:55] duration metric: took 3.005154ms for default service account to be created ...
	I1025 10:21:52.674024  638584 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:21:52.677130  638584 system_pods.go:86] 8 kube-system pods found
	I1025 10:21:52.677169  638584 system_pods.go:89] "coredns-66bc5c9577-545dp" [a2709fe3-a1d1-4394-8cf7-3776dc8fd318] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:52.677181  638584 system_pods.go:89] "etcd-embed-certs-683681" [efd93203-1fbf-495a-8d60-73421d0a6d9c] Running
	I1025 10:21:52.677191  638584 system_pods.go:89] "kindnet-5zktx" [3398616a-6eb4-432e-bb84-ae1f166c7e71] Running
	I1025 10:21:52.677195  638584 system_pods.go:89] "kube-apiserver-embed-certs-683681" [0ff30802-f9d1-465d-8d94-769528b99497] Running
	I1025 10:21:52.677201  638584 system_pods.go:89] "kube-controller-manager-embed-certs-683681" [b794e426-ac8e-48d6-b9e3-38998ed4f272] Running
	I1025 10:21:52.677206  638584 system_pods.go:89] "kube-proxy-dbks6" [551b9ca3-e53d-4be0-bcb5-b96d76be6c14] Running
	I1025 10:21:52.677212  638584 system_pods.go:89] "kube-scheduler-embed-certs-683681" [97ed6526-e7b5-4086-a584-7e0cb301fb30] Running
	I1025 10:21:52.677223  638584 system_pods.go:89] "storage-provisioner" [42d81686-dd78-4ed1-9ead-cbcdca1d14ce] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:52.677255  638584 retry.go:31] will retry after 207.699186ms: missing components: kube-dns
	I1025 10:21:52.889747  638584 system_pods.go:86] 8 kube-system pods found
	I1025 10:21:52.889810  638584 system_pods.go:89] "coredns-66bc5c9577-545dp" [a2709fe3-a1d1-4394-8cf7-3776dc8fd318] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:52.889819  638584 system_pods.go:89] "etcd-embed-certs-683681" [efd93203-1fbf-495a-8d60-73421d0a6d9c] Running
	I1025 10:21:52.889834  638584 system_pods.go:89] "kindnet-5zktx" [3398616a-6eb4-432e-bb84-ae1f166c7e71] Running
	I1025 10:21:52.889839  638584 system_pods.go:89] "kube-apiserver-embed-certs-683681" [0ff30802-f9d1-465d-8d94-769528b99497] Running
	I1025 10:21:52.889854  638584 system_pods.go:89] "kube-controller-manager-embed-certs-683681" [b794e426-ac8e-48d6-b9e3-38998ed4f272] Running
	I1025 10:21:52.889859  638584 system_pods.go:89] "kube-proxy-dbks6" [551b9ca3-e53d-4be0-bcb5-b96d76be6c14] Running
	I1025 10:21:52.889867  638584 system_pods.go:89] "kube-scheduler-embed-certs-683681" [97ed6526-e7b5-4086-a584-7e0cb301fb30] Running
	I1025 10:21:52.889879  638584 system_pods.go:89] "storage-provisioner" [42d81686-dd78-4ed1-9ead-cbcdca1d14ce] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:52.889906  638584 retry.go:31] will retry after 319.387436ms: missing components: kube-dns
	I1025 10:21:53.212708  638584 system_pods.go:86] 8 kube-system pods found
	I1025 10:21:53.212741  638584 system_pods.go:89] "coredns-66bc5c9577-545dp" [a2709fe3-a1d1-4394-8cf7-3776dc8fd318] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:53.212748  638584 system_pods.go:89] "etcd-embed-certs-683681" [efd93203-1fbf-495a-8d60-73421d0a6d9c] Running
	I1025 10:21:53.212753  638584 system_pods.go:89] "kindnet-5zktx" [3398616a-6eb4-432e-bb84-ae1f166c7e71] Running
	I1025 10:21:53.212757  638584 system_pods.go:89] "kube-apiserver-embed-certs-683681" [0ff30802-f9d1-465d-8d94-769528b99497] Running
	I1025 10:21:53.212762  638584 system_pods.go:89] "kube-controller-manager-embed-certs-683681" [b794e426-ac8e-48d6-b9e3-38998ed4f272] Running
	I1025 10:21:53.212765  638584 system_pods.go:89] "kube-proxy-dbks6" [551b9ca3-e53d-4be0-bcb5-b96d76be6c14] Running
	I1025 10:21:53.212769  638584 system_pods.go:89] "kube-scheduler-embed-certs-683681" [97ed6526-e7b5-4086-a584-7e0cb301fb30] Running
	I1025 10:21:53.212772  638584 system_pods.go:89] "storage-provisioner" [42d81686-dd78-4ed1-9ead-cbcdca1d14ce] Running
	I1025 10:21:53.212781  638584 system_pods.go:126] duration metric: took 538.748598ms to wait for k8s-apps to be running ...
	I1025 10:21:53.212792  638584 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:21:53.212838  638584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:21:53.227721  638584 system_svc.go:56] duration metric: took 14.91387ms WaitForService to wait for kubelet
	I1025 10:21:53.227757  638584 kubeadm.go:586] duration metric: took 11.598992037s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:21:53.227783  638584 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:21:53.231073  638584 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 10:21:53.231102  638584 node_conditions.go:123] node cpu capacity is 8
	I1025 10:21:53.231116  638584 node_conditions.go:105] duration metric: took 3.327789ms to run NodePressure ...
	I1025 10:21:53.231127  638584 start.go:241] waiting for startup goroutines ...
	I1025 10:21:53.231134  638584 start.go:246] waiting for cluster config update ...
	I1025 10:21:53.231145  638584 start.go:255] writing updated cluster config ...
	I1025 10:21:53.231433  638584 ssh_runner.go:195] Run: rm -f paused
	I1025 10:21:53.235996  638584 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:53.239628  638584 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-545dp" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.246519  638584 pod_ready.go:94] pod "coredns-66bc5c9577-545dp" is "Ready"
	I1025 10:21:54.246556  638584 pod_ready.go:86] duration metric: took 1.006903697s for pod "coredns-66bc5c9577-545dp" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.249657  638584 pod_ready.go:83] waiting for pod "etcd-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.254284  638584 pod_ready.go:94] pod "etcd-embed-certs-683681" is "Ready"
	I1025 10:21:54.254351  638584 pod_ready.go:86] duration metric: took 4.629709ms for pod "etcd-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.256768  638584 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.261130  638584 pod_ready.go:94] pod "kube-apiserver-embed-certs-683681" is "Ready"
	I1025 10:21:54.261157  638584 pod_ready.go:86] duration metric: took 4.363563ms for pod "kube-apiserver-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.263224  638584 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.443581  638584 pod_ready.go:94] pod "kube-controller-manager-embed-certs-683681" is "Ready"
	I1025 10:21:54.443610  638584 pod_ready.go:86] duration metric: took 180.36054ms for pod "kube-controller-manager-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.644082  638584 pod_ready.go:83] waiting for pod "kube-proxy-dbks6" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:55.044226  638584 pod_ready.go:94] pod "kube-proxy-dbks6" is "Ready"
	I1025 10:21:55.044259  638584 pod_ready.go:86] duration metric: took 400.15124ms for pod "kube-proxy-dbks6" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:55.243900  638584 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:55.643886  638584 pod_ready.go:94] pod "kube-scheduler-embed-certs-683681" is "Ready"
	I1025 10:21:55.643919  638584 pod_ready.go:86] duration metric: took 399.992242ms for pod "kube-scheduler-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:55.643935  638584 pod_ready.go:40] duration metric: took 2.407895178s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:55.697477  638584 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 10:21:55.699399  638584 out.go:179] * Done! kubectl is now configured to use "embed-certs-683681" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 25 10:21:26 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:26.587272916Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:21:26 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:26.591016928Z" level=info msg="Created container ca8f4fcd2a56062f5c1f17ebc13c4f6bf06374fe033976033296714be035dadc: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p/dashboard-metrics-scraper" id=2cdc36db-af74-402d-823a-e985d95d582f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:21:26 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:26.591267616Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:21:26 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:26.591292385Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:21:26 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:26.591310547Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:21:26 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:26.591762895Z" level=info msg="Starting container: ca8f4fcd2a56062f5c1f17ebc13c4f6bf06374fe033976033296714be035dadc" id=1a85c2d4-a979-4e1e-a5bd-3655a0b55c45 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:21:26 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:26.594184544Z" level=info msg="Started container" PID=1723 containerID=ca8f4fcd2a56062f5c1f17ebc13c4f6bf06374fe033976033296714be035dadc description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p/dashboard-metrics-scraper id=1a85c2d4-a979-4e1e-a5bd-3655a0b55c45 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7a196dde484e7e357d954640a68a59c9b6256c089007961aac9fa38cccb2da18
	Oct 25 10:21:26 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:26.596356751Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:21:26 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:26.596386294Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:21:26 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:26.596413828Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:21:26 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:26.601562104Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:21:26 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:26.601591359Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:21:27 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:27.547144922Z" level=info msg="Removing container: aae61be449204dff95396d9dbc0f4ba5dc97b70b07826043e04345a10d421a76" id=829e32dc-17f7-4b42-a90c-838524554323 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:21:27 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:27.558373893Z" level=info msg="Removed container aae61be449204dff95396d9dbc0f4ba5dc97b70b07826043e04345a10d421a76: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p/dashboard-metrics-scraper" id=829e32dc-17f7-4b42-a90c-838524554323 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:21:48 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:48.45523542Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8f6eeebe-3517-47f5-8b42-29890f379a85 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:21:48 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:48.456510216Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5ec3dc0d-e2ee-423d-a981-f3023fd210d2 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:21:48 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:48.458035169Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p/dashboard-metrics-scraper" id=7e43b90e-72fd-4da8-a0bb-1631f8d733e8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:21:48 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:48.458186641Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:48 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:48.464745392Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:48 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:48.465305504Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:48 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:48.502757488Z" level=info msg="Created container 1c249100b1cdb4e0f46f4f1eee7d35d1ec8fc6f35a9262f42b142aeb9b478f15: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p/dashboard-metrics-scraper" id=7e43b90e-72fd-4da8-a0bb-1631f8d733e8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:21:48 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:48.503602952Z" level=info msg="Starting container: 1c249100b1cdb4e0f46f4f1eee7d35d1ec8fc6f35a9262f42b142aeb9b478f15" id=7fbc00af-4480-4082-a7b4-3509e9369c53 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:21:48 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:48.506064007Z" level=info msg="Started container" PID=1795 containerID=1c249100b1cdb4e0f46f4f1eee7d35d1ec8fc6f35a9262f42b142aeb9b478f15 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p/dashboard-metrics-scraper id=7fbc00af-4480-4082-a7b4-3509e9369c53 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7a196dde484e7e357d954640a68a59c9b6256c089007961aac9fa38cccb2da18
	Oct 25 10:21:48 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:48.602853051Z" level=info msg="Removing container: ca8f4fcd2a56062f5c1f17ebc13c4f6bf06374fe033976033296714be035dadc" id=c56ed25c-8d92-42f5-b04d-17d477ac91cc name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:21:48 default-k8s-diff-port-767846 crio[562]: time="2025-10-25T10:21:48.613274469Z" level=info msg="Removed container ca8f4fcd2a56062f5c1f17ebc13c4f6bf06374fe033976033296714be035dadc: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p/dashboard-metrics-scraper" id=c56ed25c-8d92-42f5-b04d-17d477ac91cc name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	1c249100b1cdb       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   7a196dde484e7       dashboard-metrics-scraper-6ffb444bf9-vbr9p             kubernetes-dashboard
	fb5a07f67d104       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   083689c88ea02       kubernetes-dashboard-855c9754f9-wzpft                  kubernetes-dashboard
	24856409af1d2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Running             storage-provisioner         1                   0bf2c373fa8bc       storage-provisioner                                    kube-system
	fed6cef8fa113       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   8da6339621c64       busybox                                                default
	09e2459273fad       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   d2a2600813b0c       kindnet-vcqs2                                          kube-system
	2f0454c1c473b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   0bf2c373fa8bc       storage-provisioner                                    kube-system
	ca8e9fdba848b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   6809cb7f0bba0       coredns-66bc5c9577-rznxv                               kube-system
	040afacf3651f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   292c406022822       kube-proxy-cvm5c                                       kube-system
	5651b5355eb31       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           54 seconds ago      Running             etcd                        0                   5af95fde8cdc4       etcd-default-k8s-diff-port-767846                      kube-system
	4a3076ac0e1e7       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           54 seconds ago      Running             kube-controller-manager     0                   c2666612f8730       kube-controller-manager-default-k8s-diff-port-767846   kube-system
	19816f19d39c5       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           54 seconds ago      Running             kube-scheduler              0                   d8062c2d8805f       kube-scheduler-default-k8s-diff-port-767846            kube-system
	93e7c0501a9a9       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           54 seconds ago      Running             kube-apiserver              0                   46c18b51b3782       kube-apiserver-default-k8s-diff-port-767846            kube-system
	
	
	==> coredns [ca8e9fdba848b911be60a6b3b46d5c7a4141cbb69f8d11609a1d58392aeee7c1] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40328 - 50857 "HINFO IN 9163499815538976087.2896879621534406158. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.108690778s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-767846
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-767846
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=default-k8s-diff-port-767846
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_20_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:20:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-767846
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:21:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:21:44 +0000   Sat, 25 Oct 2025 10:20:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:21:44 +0000   Sat, 25 Oct 2025 10:20:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:21:44 +0000   Sat, 25 Oct 2025 10:20:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:21:44 +0000   Sat, 25 Oct 2025 10:20:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-767846
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                993ff0b7-fce7-4433-b2bb-acc59f575ba5
	  Boot ID:                    41de8bfa-0cc1-441f-80d4-c56fb8371229
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-rznxv                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-default-k8s-diff-port-767846                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-vcqs2                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-default-k8s-diff-port-767846             250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-767846    200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-cvm5c                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-default-k8s-diff-port-767846             100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vbr9p              0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-wzpft                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 103s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  110s               kubelet          Node default-k8s-diff-port-767846 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s               kubelet          Node default-k8s-diff-port-767846 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s               kubelet          Node default-k8s-diff-port-767846 status is now: NodeHasSufficientPID
	  Normal  Starting                 110s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s               node-controller  Node default-k8s-diff-port-767846 event: Registered Node default-k8s-diff-port-767846 in Controller
	  Normal  NodeReady                93s                kubelet          Node default-k8s-diff-port-767846 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)  kubelet          Node default-k8s-diff-port-767846 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)  kubelet          Node default-k8s-diff-port-767846 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)  kubelet          Node default-k8s-diff-port-767846 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                node-controller  Node default-k8s-diff-port-767846 event: Registered Node default-k8s-diff-port-767846 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[Oct25 10:18] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 3d 4d bf 49 5d 08 06
	[  +0.000365] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fe 72 b8 ab d2 81 08 06
	[ +29.291338] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 96 23 11 37 e3 00 08 06
	[  +0.000335] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[ +21.527050] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 89 98 95 1f c3 08 06
	[  +0.000689] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[Oct25 10:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[  +9.472150] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
	[  +6.585715] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ce 90 e9 36 a0 95 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[ +15.111475] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 5e 04 d2 54 0d 08 06
	[  +0.000467] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
	
	
	==> etcd [5651b5355eb316ad91569abe8d79084a109bfb7f5e3317226217acc032d02de1] <==
	{"level":"warn","ts":"2025-10-25T10:21:15.388722Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"145.693598ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T10:21:15.389000Z","caller":"traceutil/trace.go:172","msg":"trace[1265438700] range","detail":"{range_begin:/registry/clusterrolebindings; range_end:; response_count:0; response_revision:458; }","duration":"145.976612ms","start":"2025-10-25T10:21:15.243009Z","end":"2025-10-25T10:21:15.388986Z","steps":["trace[1265438700] 'agreement among raft nodes before linearized reading'  (duration: 113.85484ms)","trace[1265438700] 'range keys from in-memory index tree'  (duration: 31.820531ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T10:21:15.389121Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.555005ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/default-k8s-diff-port-767846.1871b4c14445cdb9\" limit:1 ","response":"range_response_count:1 size:793"}
	{"level":"info","ts":"2025-10-25T10:21:15.389168Z","caller":"traceutil/trace.go:172","msg":"trace[1125304537] range","detail":"{range_begin:/registry/events/default/default-k8s-diff-port-767846.1871b4c14445cdb9; range_end:; response_count:1; response_revision:460; }","duration":"121.607993ms","start":"2025-10-25T10:21:15.267550Z","end":"2025-10-25T10:21:15.389158Z","steps":["trace[1125304537] 'agreement among raft nodes before linearized reading'  (duration: 121.466738ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:21:15.526678Z","caller":"traceutil/trace.go:172","msg":"trace[626839294] linearizableReadLoop","detail":"{readStateIndex:491; appliedIndex:491; }","duration":"126.89204ms","start":"2025-10-25T10:21:15.399749Z","end":"2025-10-25T10:21:15.526641Z","steps":["trace[626839294] 'read index received'  (duration: 126.881605ms)","trace[626839294] 'applied index is now lower than readState.Index'  (duration: 8.668µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T10:21:15.560301Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.520116ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-view\" limit:1 ","response":"range_response_count:1 size:2030"}
	{"level":"info","ts":"2025-10-25T10:21:15.560470Z","caller":"traceutil/trace.go:172","msg":"trace[941068557] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-view; range_end:; response_count:1; response_revision:461; }","duration":"160.708342ms","start":"2025-10-25T10:21:15.399740Z","end":"2025-10-25T10:21:15.560449Z","steps":["trace[941068557] 'agreement among raft nodes before linearized reading'  (duration: 127.044473ms)","trace[941068557] 'range keys from in-memory index tree'  (duration: 33.34065ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T10:21:15.560527Z","caller":"traceutil/trace.go:172","msg":"trace[102056638] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"156.010479ms","start":"2025-10-25T10:21:15.404499Z","end":"2025-10-25T10:21:15.560510Z","steps":["trace[102056638] 'process raft request'  (duration: 155.955149ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:21:15.560777Z","caller":"traceutil/trace.go:172","msg":"trace[1661513789] transaction","detail":"{read_only:false; response_revision:462; number_of_response:1; }","duration":"161.2039ms","start":"2025-10-25T10:21:15.399556Z","end":"2025-10-25T10:21:15.560760Z","steps":["trace[1661513789] 'process raft request'  (duration: 127.234079ms)","trace[1661513789] 'compare'  (duration: 33.40684ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T10:21:15.560871Z","caller":"traceutil/trace.go:172","msg":"trace[827880719] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"159.886445ms","start":"2025-10-25T10:21:15.400971Z","end":"2025-10-25T10:21:15.560857Z","steps":["trace[827880719] 'process raft request'  (duration: 159.40968ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:21:15.688574Z","caller":"traceutil/trace.go:172","msg":"trace[936086823] linearizableReadLoop","detail":"{readStateIndex:494; appliedIndex:494; }","duration":"116.398024ms","start":"2025-10-25T10:21:15.572145Z","end":"2025-10-25T10:21:15.688543Z","steps":["trace[936086823] 'read index received'  (duration: 116.382763ms)","trace[936086823] 'applied index is now lower than readState.Index'  (duration: 12.749µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T10:21:15.740073Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"167.903144ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/cluster-admin\" limit:1 ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2025-10-25T10:21:15.740139Z","caller":"traceutil/trace.go:172","msg":"trace[669493993] range","detail":"{range_begin:/registry/clusterroles/cluster-admin; range_end:; response_count:1; response_revision:464; }","duration":"167.986884ms","start":"2025-10-25T10:21:15.572135Z","end":"2025-10-25T10:21:15.740122Z","steps":["trace[669493993] 'agreement among raft nodes before linearized reading'  (duration: 116.482321ms)","trace[669493993] 'range keys from in-memory index tree'  (duration: 51.270223ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T10:21:15.740211Z","caller":"traceutil/trace.go:172","msg":"trace[423577085] transaction","detail":"{read_only:false; response_revision:466; number_of_response:1; }","duration":"164.616026ms","start":"2025-10-25T10:21:15.575585Z","end":"2025-10-25T10:21:15.740201Z","steps":["trace[423577085] 'process raft request'  (duration: 164.560143ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:21:15.740200Z","caller":"traceutil/trace.go:172","msg":"trace[879190047] transaction","detail":"{read_only:false; response_revision:465; number_of_response:1; }","duration":"170.709428ms","start":"2025-10-25T10:21:15.569462Z","end":"2025-10-25T10:21:15.740171Z","steps":["trace[879190047] 'process raft request'  (duration: 119.113975ms)","trace[879190047] 'compare'  (duration: 51.430181ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T10:21:15.740391Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"167.624921ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kubernetes-dashboard\" limit:1 ","response":"range_response_count:1 size:897"}
	{"level":"info","ts":"2025-10-25T10:21:15.740434Z","caller":"traceutil/trace.go:172","msg":"trace[607198520] range","detail":"{range_begin:/registry/namespaces/kubernetes-dashboard; range_end:; response_count:1; response_revision:466; }","duration":"167.678646ms","start":"2025-10-25T10:21:15.572743Z","end":"2025-10-25T10:21:15.740422Z","steps":["trace[607198520] 'agreement among raft nodes before linearized reading'  (duration: 167.513996ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:21:16.080869Z","caller":"traceutil/trace.go:172","msg":"trace[713184015] linearizableReadLoop","detail":"{readStateIndex:501; appliedIndex:501; }","duration":"251.623203ms","start":"2025-10-25T10:21:15.829216Z","end":"2025-10-25T10:21:16.080839Z","steps":["trace[713184015] 'read index received'  (duration: 251.613807ms)","trace[713184015] 'applied index is now lower than readState.Index'  (duration: 8.15µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T10:21:16.081544Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"256.951903ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kubernetes-dashboard/kubernetes-dashboard\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T10:21:16.082429Z","caller":"traceutil/trace.go:172","msg":"trace[2103869838] range","detail":"{range_begin:/registry/rolebindings/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:0; response_revision:471; }","duration":"257.920856ms","start":"2025-10-25T10:21:15.824486Z","end":"2025-10-25T10:21:16.082407Z","steps":["trace[2103869838] 'agreement among raft nodes before linearized reading'  (duration: 256.476122ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:21:16.082264Z","caller":"traceutil/trace.go:172","msg":"trace[1942929163] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"253.880677ms","start":"2025-10-25T10:21:15.828365Z","end":"2025-10-25T10:21:16.082246Z","steps":["trace[1942929163] 'process raft request'  (duration: 252.650208ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:21:16.082625Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"244.573435ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/default-k8s-diff-port-767846.1871b4c14445964c\" limit:1 ","response":"range_response_count:1 size:797"}
	{"level":"warn","ts":"2025-10-25T10:21:16.082431Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"248.027449ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/view\" limit:1 ","response":"range_response_count:1 size:2208"}
	{"level":"info","ts":"2025-10-25T10:21:16.082848Z","caller":"traceutil/trace.go:172","msg":"trace[1376153136] range","detail":"{range_begin:/registry/clusterroles/view; range_end:; response_count:1; response_revision:472; }","duration":"248.447742ms","start":"2025-10-25T10:21:15.834381Z","end":"2025-10-25T10:21:16.082828Z","steps":["trace[1376153136] 'agreement among raft nodes before linearized reading'  (duration: 247.923851ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T10:21:16.082655Z","caller":"traceutil/trace.go:172","msg":"trace[614345126] range","detail":"{range_begin:/registry/events/default/default-k8s-diff-port-767846.1871b4c14445964c; range_end:; response_count:1; response_revision:472; }","duration":"244.606866ms","start":"2025-10-25T10:21:15.838038Z","end":"2025-10-25T10:21:16.082645Z","steps":["trace[614345126] 'agreement among raft nodes before linearized reading'  (duration: 244.512212ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:22:06 up  2:04,  0 user,  load average: 4.96, 5.06, 5.93
	Linux default-k8s-diff-port-767846 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [09e2459273fad439995d9ffdb8adfd372d7c377970843fbc1f657d31bc15c555] <==
	I1025 10:21:16.367670       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:21:16.368094       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1025 10:21:16.368709       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:21:16.368832       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:21:16.368910       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:21:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:21:16.571130       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:21:16.571557       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:21:16.571588       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:21:16.571787       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 10:21:16.972273       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:21:16.972305       1 metrics.go:72] Registering metrics
	I1025 10:21:16.972390       1 controller.go:711] "Syncing nftables rules"
	I1025 10:21:26.571550       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 10:21:26.571614       1 main.go:301] handling current node
	I1025 10:21:36.577609       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 10:21:36.577646       1 main.go:301] handling current node
	I1025 10:21:46.571954       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 10:21:46.572015       1 main.go:301] handling current node
	I1025 10:21:56.572421       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 10:21:56.572474       1 main.go:301] handling current node
	
	
	==> kube-apiserver [93e7c0501a9a92272de292874e804fe8724d5cd8097e77aa3924e634b8f8d63b] <==
	I1025 10:21:14.008433       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:21:14.008613       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 10:21:14.008643       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 10:21:14.009108       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 10:21:14.009436       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:21:14.009484       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 10:21:14.009518       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 10:21:14.013029       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 10:21:14.013145       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:21:14.014308       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 10:21:14.045953       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:21:14.065348       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1025 10:21:14.164149       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 10:21:14.469480       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:21:14.702373       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:21:15.241399       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:21:15.403851       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:21:15.796249       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:21:16.096025       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:21:16.175882       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.102.150"}
	I1025 10:21:16.194593       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.238.69"}
	I1025 10:21:18.599981       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:21:18.799578       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:21:18.899024       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4a3076ac0e1e7cab1ae1e3436bd70e3c3b3965b186f842a7e0c0d524505d0c57] <==
	I1025 10:21:18.341186       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 10:21:18.344690       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 10:21:18.345101       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 10:21:18.346291       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 10:21:18.346310       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 10:21:18.346342       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 10:21:18.346370       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 10:21:18.346497       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 10:21:18.346517       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:21:18.346558       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:21:18.346639       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 10:21:18.346710       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-767846"
	I1025 10:21:18.346760       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 10:21:18.351785       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 10:21:18.351851       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 10:21:18.351911       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 10:21:18.351922       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 10:21:18.351850       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:21:18.351928       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 10:21:18.354064       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:21:18.366257       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 10:21:18.369614       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 10:21:18.371889       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 10:21:18.373965       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 10:21:18.376367       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [040afacf3651f3df296c0fb9e05451bd6f2a7e10325871a10ea903d99da7a876] <==
	I1025 10:21:15.605434       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:21:15.671393       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:21:15.772537       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:21:15.772587       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1025 10:21:15.772694       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:21:15.829911       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:21:15.829980       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:21:15.841381       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:21:15.841794       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:21:15.841871       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:21:15.843386       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:21:15.843419       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:21:15.843506       1 config.go:200] "Starting service config controller"
	I1025 10:21:15.843518       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:21:15.843498       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:21:15.843542       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:21:15.843794       1 config.go:309] "Starting node config controller"
	I1025 10:21:15.843815       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:21:15.843823       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:21:15.943698       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:21:15.943741       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 10:21:15.943761       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [19816f19d39c5773a667353841a1802f9e8d4a9493ed76177e3cffba9eb45dd7] <==
	I1025 10:21:13.315812       1 serving.go:386] Generated self-signed cert in-memory
	I1025 10:21:14.722073       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:21:14.722134       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:21:14.867401       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:21:14.867457       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:21:14.867599       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:21:14.867658       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:21:14.867817       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:21:14.867915       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 10:21:14.867916       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1025 10:21:14.867940       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1025 10:21:14.968001       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:21:14.968061       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1025 10:21:14.968133       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:21:16 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:16.488793     722 scope.go:117] "RemoveContainer" containerID="2f0454c1c473b531c3c2ce0e0e81352e26d1c0cd6888ff3fe87bd24e68ae0248"
	Oct 25 10:21:19 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:19.071899     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cb132c7c-4000-49c6-a124-5f449d55cb74-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-vbr9p\" (UID: \"cb132c7c-4000-49c6-a124-5f449d55cb74\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p"
	Oct 25 10:21:19 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:19.071976     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd9sd\" (UniqueName: \"kubernetes.io/projected/cb132c7c-4000-49c6-a124-5f449d55cb74-kube-api-access-nd9sd\") pod \"dashboard-metrics-scraper-6ffb444bf9-vbr9p\" (UID: \"cb132c7c-4000-49c6-a124-5f449d55cb74\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p"
	Oct 25 10:21:19 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:19.072015     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f628496a-a0ef-4646-bd5b-6469e37ccbd4-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-wzpft\" (UID: \"f628496a-a0ef-4646-bd5b-6469e37ccbd4\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wzpft"
	Oct 25 10:21:19 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:19.072082     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dv7l\" (UniqueName: \"kubernetes.io/projected/f628496a-a0ef-4646-bd5b-6469e37ccbd4-kube-api-access-9dv7l\") pod \"kubernetes-dashboard-855c9754f9-wzpft\" (UID: \"f628496a-a0ef-4646-bd5b-6469e37ccbd4\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wzpft"
	Oct 25 10:21:23 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:23.603729     722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wzpft" podStartSLOduration=1.899663919 podStartE2EDuration="5.603684029s" podCreationTimestamp="2025-10-25 10:21:18 +0000 UTC" firstStartedPulling="2025-10-25 10:21:19.294796888 +0000 UTC m=+7.947597945" lastFinishedPulling="2025-10-25 10:21:22.99881699 +0000 UTC m=+11.651618055" observedRunningTime="2025-10-25 10:21:23.60343828 +0000 UTC m=+12.256239357" watchObservedRunningTime="2025-10-25 10:21:23.603684029 +0000 UTC m=+12.256502900"
	Oct 25 10:21:26 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:26.536898     722 scope.go:117] "RemoveContainer" containerID="aae61be449204dff95396d9dbc0f4ba5dc97b70b07826043e04345a10d421a76"
	Oct 25 10:21:27 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:27.542237     722 scope.go:117] "RemoveContainer" containerID="aae61be449204dff95396d9dbc0f4ba5dc97b70b07826043e04345a10d421a76"
	Oct 25 10:21:27 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:27.542395     722 scope.go:117] "RemoveContainer" containerID="ca8f4fcd2a56062f5c1f17ebc13c4f6bf06374fe033976033296714be035dadc"
	Oct 25 10:21:27 default-k8s-diff-port-767846 kubelet[722]: E1025 10:21:27.542728     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vbr9p_kubernetes-dashboard(cb132c7c-4000-49c6-a124-5f449d55cb74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p" podUID="cb132c7c-4000-49c6-a124-5f449d55cb74"
	Oct 25 10:21:28 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:28.547571     722 scope.go:117] "RemoveContainer" containerID="ca8f4fcd2a56062f5c1f17ebc13c4f6bf06374fe033976033296714be035dadc"
	Oct 25 10:21:28 default-k8s-diff-port-767846 kubelet[722]: E1025 10:21:28.547779     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vbr9p_kubernetes-dashboard(cb132c7c-4000-49c6-a124-5f449d55cb74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p" podUID="cb132c7c-4000-49c6-a124-5f449d55cb74"
	Oct 25 10:21:34 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:34.475025     722 scope.go:117] "RemoveContainer" containerID="ca8f4fcd2a56062f5c1f17ebc13c4f6bf06374fe033976033296714be035dadc"
	Oct 25 10:21:34 default-k8s-diff-port-767846 kubelet[722]: E1025 10:21:34.475304     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vbr9p_kubernetes-dashboard(cb132c7c-4000-49c6-a124-5f449d55cb74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p" podUID="cb132c7c-4000-49c6-a124-5f449d55cb74"
	Oct 25 10:21:48 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:48.454577     722 scope.go:117] "RemoveContainer" containerID="ca8f4fcd2a56062f5c1f17ebc13c4f6bf06374fe033976033296714be035dadc"
	Oct 25 10:21:48 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:48.601024     722 scope.go:117] "RemoveContainer" containerID="ca8f4fcd2a56062f5c1f17ebc13c4f6bf06374fe033976033296714be035dadc"
	Oct 25 10:21:48 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:48.601280     722 scope.go:117] "RemoveContainer" containerID="1c249100b1cdb4e0f46f4f1eee7d35d1ec8fc6f35a9262f42b142aeb9b478f15"
	Oct 25 10:21:48 default-k8s-diff-port-767846 kubelet[722]: E1025 10:21:48.601568     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vbr9p_kubernetes-dashboard(cb132c7c-4000-49c6-a124-5f449d55cb74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p" podUID="cb132c7c-4000-49c6-a124-5f449d55cb74"
	Oct 25 10:21:54 default-k8s-diff-port-767846 kubelet[722]: I1025 10:21:54.474701     722 scope.go:117] "RemoveContainer" containerID="1c249100b1cdb4e0f46f4f1eee7d35d1ec8fc6f35a9262f42b142aeb9b478f15"
	Oct 25 10:21:54 default-k8s-diff-port-767846 kubelet[722]: E1025 10:21:54.474935     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vbr9p_kubernetes-dashboard(cb132c7c-4000-49c6-a124-5f449d55cb74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vbr9p" podUID="cb132c7c-4000-49c6-a124-5f449d55cb74"
	Oct 25 10:22:01 default-k8s-diff-port-767846 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:22:01 default-k8s-diff-port-767846 kubelet[722]: I1025 10:22:01.401339     722 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 25 10:22:01 default-k8s-diff-port-767846 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:22:01 default-k8s-diff-port-767846 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 10:22:01 default-k8s-diff-port-767846 systemd[1]: kubelet.service: Consumed 1.858s CPU time.
	
	
	==> kubernetes-dashboard [fb5a07f67d104ece5c4e59cf02a6acaa20151d01116039e6818d51c497d4e740] <==
	2025/10/25 10:21:23 Starting overwatch
	2025/10/25 10:21:23 Using namespace: kubernetes-dashboard
	2025/10/25 10:21:23 Using in-cluster config to connect to apiserver
	2025/10/25 10:21:23 Using secret token for csrf signing
	2025/10/25 10:21:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 10:21:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 10:21:23 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 10:21:23 Generating JWE encryption key
	2025/10/25 10:21:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 10:21:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 10:21:23 Initializing JWE encryption key from synchronized object
	2025/10/25 10:21:23 Creating in-cluster Sidecar client
	2025/10/25 10:21:23 Serving insecurely on HTTP port: 9090
	2025/10/25 10:21:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:21:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [24856409af1d28dfd7c81bbb566035594b19ffe4e449271ef2769f0a51f01272] <==
	W1025 10:21:42.004025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:44.006843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:44.011396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:46.014524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:46.018737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:48.022535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:48.026933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:50.030059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:50.034512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:52.037578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:52.042017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:54.045944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:54.052531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:56.056181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:56.061122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:58.064712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:58.072193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:22:00.075581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:22:00.080468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:22:02.084829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:22:02.089721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:22:04.093500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:22:04.099883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:22:06.102919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:22:06.107424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [2f0454c1c473b531c3c2ce0e0e81352e26d1c0cd6888ff3fe87bd24e68ae0248] <==
	I1025 10:21:15.735934       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 10:21:15.737849       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
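The fatal in the second storage-provisioner block (container 2f0454c1c473) lines up with the kubelet's "RemoveContainer" event at 10:21:16 for the same ID: that instance came up before the apiserver was reachable and was replaced by the healthy instance 24856409af1d. To correlate container IDs and states by hand, one option (a sketch, assuming the profile is still running; crictl ships in the kicbase node image) is:

	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-767846 -- sudo crictl ps -a --name storage-provisioner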
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-767846 -n default-k8s-diff-port-767846
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-767846 -n default-k8s-diff-port-767846: exit status 2 (382.791471ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
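The apiserver itself still reports Running, which matches the pause never taking effect; the exit status 2 comes from minikube treating some other status field as unhealthy (the harness notes below that this "may be ok"). The full per-component breakdown is available with the standard JSON output flag:

	out/minikube-linux-amd64 status -p default-k8s-diff-port-767846 --output=json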
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-767846 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.13s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-683681 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-683681 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (276.775036ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:22:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-683681 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-683681 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-683681 describe deploy/metrics-server -n kube-system: exit status 1 (67.926568ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-683681 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
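The failure chain above is consistent: the pre-enable pause check shells out to "sudo runc list -f json" (per the MK_ADDON_ENABLE_PAUSED error), which dies on "open /run/runc: no such file or directory", so the metrics-server deployment is never created and the later kubectl describe can only return NotFound. A quick way to inspect the runtime state layout inside the node container (a sketch; the candidate paths are assumptions about the kicbase/CRI-O layout, not confirmed by the log):

	docker exec embed-certs-683681 ls -d /run/runc /run/crio

If CRI-O keeps its runc state under a different root than the default /run/runc, the runc-based check fails even while the cluster itself stays healthy.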
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-683681
helpers_test.go:243: (dbg) docker inspect embed-certs-683681:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "664aed4a01f92c3871e075fd05ccbeee5ecb48fcd448f58432e85e0dc505d878",
	        "Created": "2025-10-25T10:21:16.235046016Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 640272,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:21:16.284680009Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/664aed4a01f92c3871e075fd05ccbeee5ecb48fcd448f58432e85e0dc505d878/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/664aed4a01f92c3871e075fd05ccbeee5ecb48fcd448f58432e85e0dc505d878/hostname",
	        "HostsPath": "/var/lib/docker/containers/664aed4a01f92c3871e075fd05ccbeee5ecb48fcd448f58432e85e0dc505d878/hosts",
	        "LogPath": "/var/lib/docker/containers/664aed4a01f92c3871e075fd05ccbeee5ecb48fcd448f58432e85e0dc505d878/664aed4a01f92c3871e075fd05ccbeee5ecb48fcd448f58432e85e0dc505d878-json.log",
	        "Name": "/embed-certs-683681",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-683681:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-683681",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "664aed4a01f92c3871e075fd05ccbeee5ecb48fcd448f58432e85e0dc505d878",
	                "LowerDir": "/var/lib/docker/overlay2/22dc02559454c5069aa97024407358906ca2c7013bf26825d319003749eb66b4-init/diff:/var/lib/docker/overlay2/9d1960ec6ab151df7efbd4d43f9ccd1aaf4e0bd9e7db4285118644a6d5a54279/diff",
	                "MergedDir": "/var/lib/docker/overlay2/22dc02559454c5069aa97024407358906ca2c7013bf26825d319003749eb66b4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/22dc02559454c5069aa97024407358906ca2c7013bf26825d319003749eb66b4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/22dc02559454c5069aa97024407358906ca2c7013bf26825d319003749eb66b4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-683681",
	                "Source": "/var/lib/docker/volumes/embed-certs-683681/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-683681",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-683681",
	                "name.minikube.sigs.k8s.io": "embed-certs-683681",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3de8f0e2d69f4865f901f94cc1d69449aaa380094fa8fd5ef0fe15150a7bfb70",
	            "SandboxKey": "/var/run/docker/netns/3de8f0e2d69f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-683681": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:c0:5a:18:3a:92",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "afda803609319b40fede74121fd584f53a0a22be2a797d9c1be1e1370a5a8dff",
	                    "EndpointID": "26d9bcad397d54bd8eaeed1eb07af954f99ea673d22f7649f2041b83b3fa0e8b",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-683681",
	                        "664aed4a01f9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
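The inspect output shows the node container up with the five standard kic ports (22, 2376, 5000, 8443, 32443) published on 127.0.0.1. To pull out just the port map rather than the whole document, docker's Go-template flag works:

	docker inspect -f '{{json .NetworkSettings.Ports}}' embed-certs-683681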
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-683681 -n embed-certs-683681
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-683681 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-683681 logs -n 25: (1.194688151s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-767846 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-667966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p newest-cni-667966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ stop    │ -p default-k8s-diff-port-767846 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:21 UTC │
	│ addons  │ enable dashboard -p no-preload-899665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ start   │ -p no-preload-899665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:21 UTC │
	│ image   │ newest-cni-667966 image list --format=json                                                                                                                                                                                                    │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:20 UTC │
	│ pause   │ -p newest-cni-667966 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-767846 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ start   │ -p default-k8s-diff-port-767846 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p newest-cni-667966                                                                                                                                                                                                                          │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p newest-cni-667966                                                                                                                                                                                                                          │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p disable-driver-mounts-805899                                                                                                                                                                                                               │ disable-driver-mounts-805899 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ start   │ -p embed-certs-683681 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-683681           │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ image   │ old-k8s-version-714798 image list --format=json                                                                                                                                                                                               │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ pause   │ -p old-k8s-version-714798 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	│ delete  │ -p old-k8s-version-714798                                                                                                                                                                                                                     │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p old-k8s-version-714798                                                                                                                                                                                                                     │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ image   │ no-preload-899665 image list --format=json                                                                                                                                                                                                    │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ pause   │ -p no-preload-899665 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	│ delete  │ -p no-preload-899665                                                                                                                                                                                                                          │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:22 UTC │
	│ image   │ default-k8s-diff-port-767846 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │ 25 Oct 25 10:22 UTC │
	│ pause   │ -p default-k8s-diff-port-767846 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │                     │
	│ delete  │ -p no-preload-899665                                                                                                                                                                                                                          │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │ 25 Oct 25 10:22 UTC │
	│ addons  │ enable metrics-server -p embed-certs-683681 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-683681           │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:21:10
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:21:10.148251  638584 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:21:10.148605  638584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:21:10.148630  638584 out.go:374] Setting ErrFile to fd 2...
	I1025 10:21:10.148638  638584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:21:10.148938  638584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 10:21:10.149711  638584 out.go:368] Setting JSON to false
	I1025 10:21:10.151634  638584 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7419,"bootTime":1761380251,"procs":447,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 10:21:10.151786  638584 start.go:141] virtualization: kvm guest
	I1025 10:21:10.154262  638584 out.go:179] * [embed-certs-683681] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 10:21:10.155881  638584 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:21:10.155931  638584 notify.go:220] Checking for updates...
	I1025 10:21:10.158857  638584 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:21:10.160458  638584 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:21:10.161966  638584 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	I1025 10:21:10.163444  638584 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 10:21:10.165074  638584 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:21:10.167201  638584 config.go:182] Loaded profile config "default-k8s-diff-port-767846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:10.167413  638584 config.go:182] Loaded profile config "no-preload-899665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:10.167543  638584 config.go:182] Loaded profile config "old-k8s-version-714798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:21:10.167677  638584 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:21:10.195271  638584 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 10:21:10.195411  638584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:21:10.276912  638584 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-25 10:21:10.253206883 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:21:10.277024  638584 docker.go:318] overlay module found
	I1025 10:21:10.278915  638584 out.go:179] * Using the docker driver based on user configuration
	I1025 10:21:10.280189  638584 start.go:305] selected driver: docker
	I1025 10:21:10.280210  638584 start.go:925] validating driver "docker" against <nil>
	I1025 10:21:10.280228  638584 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:21:10.280870  638584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:21:10.351945  638584 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-25 10:21:10.340512633 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:21:10.352169  638584 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 10:21:10.352450  638584 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:21:10.354600  638584 out.go:179] * Using Docker driver with root privileges
	I1025 10:21:10.356067  638584 cni.go:84] Creating CNI manager for ""
	I1025 10:21:10.356119  638584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:21:10.356128  638584 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 10:21:10.356206  638584 start.go:349] cluster config:
	{Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:21:10.359204  638584 out.go:179] * Starting "embed-certs-683681" primary control-plane node in "embed-certs-683681" cluster
	I1025 10:21:10.360475  638584 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:21:10.361884  638584 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:21:10.363223  638584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:10.363261  638584 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:21:10.363282  638584 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 10:21:10.363300  638584 cache.go:58] Caching tarball of preloaded images
	I1025 10:21:10.363426  638584 preload.go:233] Found /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 10:21:10.363440  638584 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:21:10.363573  638584 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/config.json ...
	I1025 10:21:10.363603  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/config.json: {Name:mk7d7cb38e92abe91e5617ae8c0cde69820d256b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
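	Note: the profile save above goes through a lock-guarded write (lock.go); the {Name:mk7d7... Delay:500ms Timeout:1m0s} struct in the log is that lock's settings. A minimal Go sketch of the lock-then-write pattern, assuming a process-local lock (minikube's real lock also coordinates across processes):

	package profile

	import (
		"os"
		"path/filepath"
		"sync"
	)

	var (
		mu    sync.Mutex
		locks = map[string]*sync.Mutex{} // one lock per destination path (process-local assumption)
	)

	// writeFileLocked serializes writers on the same path, then writes
	// atomically via a temp file + rename so readers never see a torn file.
	func writeFileLocked(path string, data []byte) error {
		mu.Lock()
		l, ok := locks[path]
		if !ok {
			l = &sync.Mutex{}
			locks[path] = l
		}
		mu.Unlock()

		l.Lock()
		defer l.Unlock()

		tmp, err := os.CreateTemp(filepath.Dir(path), ".tmp-*")
		if err != nil {
			return err
		}
		if _, err := tmp.Write(data); err != nil {
			tmp.Close()
			return err
		}
		if err := tmp.Close(); err != nil {
			return err
		}
		return os.Rename(tmp.Name(), path)
	}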
	I1025 10:21:10.401470  638584 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:21:10.401501  638584 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:21:10.401524  638584 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:21:10.401557  638584 start.go:360] acquireMachinesLock for embed-certs-683681: {Name:mkb49d854e007783568583b216321c2ada753d14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:21:10.401681  638584 start.go:364] duration metric: took 100.361µs to acquireMachinesLock for "embed-certs-683681"
	I1025 10:21:10.401719  638584 start.go:93] Provisioning new machine with config: &{Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:21:10.401811  638584 start.go:125] createHost starting for "" (driver="docker")
	I1025 10:21:09.341512  636484 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:21:09.341546  636484 machine.go:96] duration metric: took 4.679953004s to provisionDockerMachine
	I1025 10:21:09.341561  636484 start.go:293] postStartSetup for "default-k8s-diff-port-767846" (driver="docker")
	I1025 10:21:09.341576  636484 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:21:09.341718  636484 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:21:09.341793  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.365110  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
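	Note: the -f argument in the inspect call above is a Go template that the docker CLI evaluates against the container's JSON to pull out the host port mapped to 22/tcp. The same expression works with the standard library's text/template, since index is a built-in function; a small sketch with hypothetical inspect data:

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// Shape mirrors only the fields the inspect format above reads.
		data := map[string]any{
			"NetworkSettings": map[string]any{
				"Ports": map[string]any{
					"22/tcp": []map[string]string{{"HostIp": "0.0.0.0", "HostPort": "33123"}},
				},
			},
		}
		t := template.Must(template.New("port").Parse(
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
		if err := t.Execute(os.Stdout, data); err != nil { // prints: 33123
			panic(err)
		}
	}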
	I1025 10:21:09.484377  636484 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:21:09.489414  636484 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:21:09.489442  636484 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:21:09.489453  636484 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/addons for local assets ...
	I1025 10:21:09.489516  636484 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/files for local assets ...
	I1025 10:21:09.489612  636484 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem -> 3254552.pem in /etc/ssl/certs
	I1025 10:21:09.489735  636484 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:21:09.499262  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:09.521134  636484 start.go:296] duration metric: took 179.55364ms for postStartSetup
	I1025 10:21:09.521229  636484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:21:09.521289  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.546865  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:09.651523  636484 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:21:09.656840  636484 fix.go:56] duration metric: took 5.400890226s for fixHost
	I1025 10:21:09.656881  636484 start.go:83] releasing machines lock for "default-k8s-diff-port-767846", held for 5.400960044s
	I1025 10:21:09.656963  636484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-767846
	I1025 10:21:09.678291  636484 ssh_runner.go:195] Run: cat /version.json
	I1025 10:21:09.678335  636484 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:21:09.678385  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.678417  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:09.699727  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:09.699888  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:09.801273  636484 ssh_runner.go:195] Run: systemctl --version
	I1025 10:21:09.869861  636484 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:21:09.912691  636484 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:21:09.918693  636484 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:21:09.918789  636484 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:21:09.929691  636484 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
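	Note: the find/mv pipeline above sidelines any pre-existing bridge/podman CNI configs (leaving loopback alone, per the earlier stat check) so that kindnet can own pod networking. A Go sketch of the same rename-to-.mk_disabled step, assuming it runs with enough privileges to rename files in /etc/cni/net.d:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		const dir = "/etc/cni/net.d"
		entries, err := os.ReadDir(dir)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		for _, e := range entries {
			name := e.Name()
			// Match what the log's find expression matches: bridge/podman
			// configs that have not already been sidelined.
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
				continue
			}
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}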
	I1025 10:21:09.929723  636484 start.go:495] detecting cgroup driver to use...
	I1025 10:21:09.929768  636484 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 10:21:09.929846  636484 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:21:09.947292  636484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:21:09.962309  636484 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:21:09.962380  636484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:21:09.981742  636484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:21:09.997805  636484 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:21:10.091545  636484 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:21:10.191661  636484 docker.go:234] disabling docker service ...
	I1025 10:21:10.191739  636484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:21:10.211470  636484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:21:10.232902  636484 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:21:10.343594  636484 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:21:10.458272  636484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:21:10.475115  636484 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:21:10.492690  636484 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:21:10.492760  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.505848  636484 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 10:21:10.505908  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.517567  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.531478  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.545455  636484 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:21:10.557702  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.571143  636484 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:10.582240  636484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
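	Note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with at least the following settings. This is an illustrative reconstruction from the commands (section placement follows CRI-O's documented config schema, not a dump of the actual file, which carries further defaults):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]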
	I1025 10:21:10.593233  636484 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:21:10.602910  636484 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:21:10.612119  636484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:10.705561  636484 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:21:10.849205  636484 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:21:10.849299  636484 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:21:10.853987  636484 start.go:563] Will wait 60s for crictl version
	I1025 10:21:10.854061  636484 ssh_runner.go:195] Run: which crictl
	I1025 10:21:10.858281  636484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:21:10.891437  636484 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:21:10.891545  636484 ssh_runner.go:195] Run: crio --version
	I1025 10:21:10.928397  636484 ssh_runner.go:195] Run: crio --version
	I1025 10:21:10.968448  636484 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:21:10.969831  636484 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-767846 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:21:10.988308  636484 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1025 10:21:10.993548  636484 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
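	Note: the one-liner above is an idempotent hosts-file update: drop any stale line for the name, append the fresh mapping, and copy the result over /etc/hosts (sudo is needed only for the final cp). A Go sketch of the same filter-and-append, with the IP and hostname from the log as illustrative inputs:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHost returns the hosts content with exactly one "<ip>\t<name>"
	// entry for name, preserving all other lines.
	func upsertHost(content, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(content, "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop stale entry for this name
			}
			kept = append(kept, line)
		}
		return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
	}

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Print(upsertHost(strings.TrimRight(string(data), "\n"),
			"192.168.103.1", "host.minikube.internal"))
	}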
	I1025 10:21:11.007467  636484 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-767846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-767846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:21:11.007638  636484 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:11.007713  636484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:11.050081  636484 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:11.050104  636484 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:21:11.050159  636484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:11.079408  636484 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:11.079432  636484 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:21:11.079440  636484 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1025 10:21:11.079542  636484 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-767846 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-767846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:21:11.079604  636484 ssh_runner.go:195] Run: crio config
	I1025 10:21:11.135081  636484 cni.go:84] Creating CNI manager for ""
	I1025 10:21:11.135104  636484 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:21:11.135125  636484 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:21:11.135152  636484 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-767846 NodeName:default-k8s-diff-port-767846 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:21:11.135274  636484 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-767846"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:21:11.135376  636484 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:21:11.146044  636484 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:21:11.146127  636484 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:21:11.157527  636484 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1025 10:21:11.173105  636484 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:21:11.194054  636484 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1025 10:21:11.210598  636484 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:21:11.215039  636484 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:21:11.228199  636484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:11.315547  636484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:21:11.344889  636484 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846 for IP: 192.168.103.2
	I1025 10:21:11.344914  636484 certs.go:195] generating shared ca certs ...
	I1025 10:21:11.344936  636484 certs.go:227] acquiring lock for ca certs: {Name:mke559c04eea9c265cb2a7beab0da125bee52db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:11.345096  636484 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key
	I1025 10:21:11.345147  636484 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key
	I1025 10:21:11.345159  636484 certs.go:257] generating profile certs ...
	I1025 10:21:11.345283  636484 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/client.key
	I1025 10:21:11.345382  636484 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/apiserver.key.0fbb729d
	I1025 10:21:11.345433  636484 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/proxy-client.key
	I1025 10:21:11.345576  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem (1338 bytes)
	W1025 10:21:11.345621  636484 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455_empty.pem, impossibly tiny 0 bytes
	I1025 10:21:11.345634  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:21:11.345661  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:21:11.345688  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:21:11.345716  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem (1679 bytes)
	I1025 10:21:11.345768  636484 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:11.346665  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:21:11.371779  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 10:21:11.395674  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:21:11.420943  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:21:11.450225  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 10:21:11.471921  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:21:11.491964  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:21:11.513657  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/default-k8s-diff-port-767846/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:21:11.539802  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem --> /usr/share/ca-certificates/325455.pem (1338 bytes)
	I1025 10:21:11.564482  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /usr/share/ca-certificates/3254552.pem (1708 bytes)
	I1025 10:21:11.585472  636484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:21:11.605762  636484 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:21:11.620550  636484 ssh_runner.go:195] Run: openssl version
	I1025 10:21:11.628742  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/325455.pem && ln -fs /usr/share/ca-certificates/325455.pem /etc/ssl/certs/325455.pem"
	I1025 10:21:11.640494  636484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/325455.pem
	I1025 10:21:11.645456  636484 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:38 /usr/share/ca-certificates/325455.pem
	I1025 10:21:11.645535  636484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/325455.pem
	I1025 10:21:11.681821  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/325455.pem /etc/ssl/certs/51391683.0"
	I1025 10:21:11.692404  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3254552.pem && ln -fs /usr/share/ca-certificates/3254552.pem /etc/ssl/certs/3254552.pem"
	I1025 10:21:11.702722  636484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3254552.pem
	I1025 10:21:11.707367  636484 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:38 /usr/share/ca-certificates/3254552.pem
	I1025 10:21:11.707434  636484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3254552.pem
	I1025 10:21:11.744550  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3254552.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:21:11.754748  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:21:11.765670  636484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:11.770501  636484 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:32 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:11.770568  636484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:11.806437  636484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
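	Note: each block above installs a CA into /usr/share/ca-certificates and links it from /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0), which is how OpenSSL locates CAs in a hashed directory. A Go sketch of one such iteration, shelling out to openssl for the hash (assumes openssl on PATH and sufficient privileges):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func installCA(pem string) error {
		// openssl prints the subject hash used for c_rehash-style lookup.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		os.Remove(link) // ignore error; a stale link is replaced below
		return os.Symlink(pem, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}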
	I1025 10:21:11.816622  636484 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:21:11.821750  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:21:11.869084  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:21:11.918865  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:21:11.967891  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:21:12.023868  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:21:12.087958  636484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
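	Note: openssl x509 -checkend 86400, used for each control-plane cert above, exits non-zero if the certificate expires within the next 24 hours. A pure-Go sketch of the same check:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file
	// will be expired checkend from now (openssl's -checkend semantics).
	func expiresWithin(path string, checkend time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(checkend).After(cert.NotAfter), nil
	}

	func main() {
		bad, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		if bad {
			os.Exit(1) // mirror openssl's non-zero exit
		}
	}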
	I1025 10:21:12.133903  636484 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-767846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-767846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:21:12.133995  636484 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:21:12.134057  636484 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:21:12.176249  636484 cri.go:89] found id: "5651b5355eb316ad91569abe8d79084a109bfb7f5e3317226217acc032d02de1"
	I1025 10:21:12.176277  636484 cri.go:89] found id: "4a3076ac0e1e7cab1ae1e3436bd70e3c3b3965b186f842a7e0c0d524505d0c57"
	I1025 10:21:12.176284  636484 cri.go:89] found id: "19816f19d39c5773a667353841a1802f9e8d4a9493ed76177e3cffba9eb45dd7"
	I1025 10:21:12.176289  636484 cri.go:89] found id: "93e7c0501a9a92272de292874e804fe8724d5cd8097e77aa3924e634b8f8d63b"
	I1025 10:21:12.176294  636484 cri.go:89] found id: ""
	I1025 10:21:12.176379  636484 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:21:12.191582  636484 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:21:12Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:21:12.191656  636484 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:21:12.201840  636484 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:21:12.201870  636484 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:21:12.201918  636484 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:21:12.211065  636484 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:21:12.211910  636484 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-767846" does not appear in /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:21:12.212424  636484 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-321838/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-767846" cluster setting kubeconfig missing "default-k8s-diff-port-767846" context setting]
	I1025 10:21:12.212991  636484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:12.214595  636484 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:21:12.225309  636484 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1025 10:21:12.225361  636484 kubeadm.go:601] duration metric: took 23.484211ms to restartPrimaryControlPlane
	I1025 10:21:12.225372  636484 kubeadm.go:402] duration metric: took 91.480993ms to StartCluster
	I1025 10:21:12.225394  636484 settings.go:142] acquiring lock: {Name:mkccf1ae03faaf532633e370e89b56d0fc3d2b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:12.225489  636484 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:21:12.226739  636484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:12.227039  636484 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:21:12.227167  636484 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:21:12.227262  636484 config.go:182] Loaded profile config "default-k8s-diff-port-767846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:12.227271  636484 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-767846"
	I1025 10:21:12.227291  636484 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-767846"
	W1025 10:21:12.227299  636484 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:21:12.227297  636484 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-767846"
	I1025 10:21:12.227332  636484 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-767846"
	I1025 10:21:12.227339  636484 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-767846"
	W1025 10:21:12.227342  636484 addons.go:247] addon dashboard should already be in state true
	I1025 10:21:12.227353  636484 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-767846"
	I1025 10:21:12.227367  636484 host.go:66] Checking if "default-k8s-diff-port-767846" exists ...
	I1025 10:21:12.227371  636484 host.go:66] Checking if "default-k8s-diff-port-767846" exists ...
	I1025 10:21:12.227806  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.227847  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.227905  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.232961  636484 out.go:179] * Verifying Kubernetes components...
	I1025 10:21:12.234572  636484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:12.260042  636484 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:21:12.260116  636484 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:21:12.261263  636484 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-767846"
	W1025 10:21:12.261282  636484 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:21:12.261305  636484 host.go:66] Checking if "default-k8s-diff-port-767846" exists ...
	I1025 10:21:12.261728  636484 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-767846 --format={{.State.Status}}
	I1025 10:21:12.262059  636484 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:21:12.262078  636484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:21:12.262129  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:12.265414  636484 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1025 10:21:09.268544  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	W1025 10:21:11.766755  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	W1025 10:21:09.831833  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:12.337504  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	I1025 10:21:12.266825  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:21:12.266852  636484 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:21:12.266926  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:12.302238  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:12.306595  636484 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:21:12.306701  636484 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:21:12.306633  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:12.307467  636484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-767846
	I1025 10:21:12.337295  636484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/default-k8s-diff-port-767846/id_rsa Username:docker}
	I1025 10:21:12.414307  636484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:21:12.436001  636484 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-767846" to be "Ready" ...
	I1025 10:21:12.436611  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:21:12.436644  636484 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:21:12.451080  636484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:21:12.456814  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:21:12.456844  636484 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:21:12.465383  636484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:21:12.479456  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:21:12.479485  636484 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:21:12.501005  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:21:12.501032  636484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:21:12.526625  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:21:12.526672  636484 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:21:12.553034  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:21:12.553076  636484 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:21:12.573193  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:21:12.573227  636484 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:21:12.590613  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:21:12.590687  636484 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:21:12.606035  636484 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:21:12.606071  636484 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:21:12.624851  636484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:21:13.931289  636484 node_ready.go:49] node "default-k8s-diff-port-767846" is "Ready"
	I1025 10:21:13.931333  636484 node_ready.go:38] duration metric: took 1.495294194s for node "default-k8s-diff-port-767846" to be "Ready" ...
	I1025 10:21:13.931355  636484 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:21:13.931415  636484 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:21:10.403779  638584 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 10:21:10.404001  638584 start.go:159] libmachine.API.Create for "embed-certs-683681" (driver="docker")
	I1025 10:21:10.404030  638584 client.go:168] LocalClient.Create starting
	I1025 10:21:10.404114  638584 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem
	I1025 10:21:10.404167  638584 main.go:141] libmachine: Decoding PEM data...
	I1025 10:21:10.404189  638584 main.go:141] libmachine: Parsing certificate...
	I1025 10:21:10.404267  638584 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem
	I1025 10:21:10.404309  638584 main.go:141] libmachine: Decoding PEM data...
	I1025 10:21:10.404335  638584 main.go:141] libmachine: Parsing certificate...
	I1025 10:21:10.404773  638584 cli_runner.go:164] Run: docker network inspect embed-certs-683681 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 10:21:10.426055  638584 cli_runner.go:211] docker network inspect embed-certs-683681 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 10:21:10.426150  638584 network_create.go:284] running [docker network inspect embed-certs-683681] to gather additional debugging logs...
	I1025 10:21:10.426175  638584 cli_runner.go:164] Run: docker network inspect embed-certs-683681
	W1025 10:21:10.450027  638584 cli_runner.go:211] docker network inspect embed-certs-683681 returned with exit code 1
	I1025 10:21:10.450066  638584 network_create.go:287] error running [docker network inspect embed-certs-683681]: docker network inspect embed-certs-683681: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-683681 not found
	I1025 10:21:10.450079  638584 network_create.go:289] output of [docker network inspect embed-certs-683681]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-683681 not found
	
	** /stderr **
	I1025 10:21:10.450215  638584 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:21:10.472971  638584 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b7c770f4d6bb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:31:17:4a:ca:3a} reservation:<nil>}
	I1025 10:21:10.473601  638584 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5189eca196b1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:42:d7:a0:fe:65} reservation:<nil>}
	I1025 10:21:10.474232  638584 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a58b5f36975c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1e:4d:ae:71:f0:49} reservation:<nil>}
	I1025 10:21:10.474754  638584 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c8aca1f62a35 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ce:65:a5:98:3f:04} reservation:<nil>}
	I1025 10:21:10.475283  638584 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-cc93092e09ae IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:c2:73:0a:fa:f6:13} reservation:<nil>}
	I1025 10:21:10.475999  638584 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a03c50}
	I1025 10:21:10.476026  638584 network_create.go:124] attempt to create docker network embed-certs-683681 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1025 10:21:10.476083  638584 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-683681 embed-certs-683681
	I1025 10:21:10.551427  638584 network_create.go:108] docker network embed-certs-683681 192.168.94.0/24 created
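	Note: the subnet probe above starts at 192.168.49.0/24 and, judging by the sequence in the log, advances the third octet in steps of 9 until it finds a /24 with no existing bridge interface. A sketch of that walk, with the taken set stubbed in from the log (the real code inspects host interfaces; the step size is inferred, not confirmed):

	package main

	import "fmt"

	func main() {
		// Subnets already claimed by other minikube networks (from the log).
		taken := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true}

		// Walk 192.168.49.0/24, 192.168.58.0/24, ... in steps of 9.
		for octet := 49; octet < 255; octet += 9 {
			if taken[octet] {
				fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
				continue
			}
			fmt.Printf("using free private subnet 192.168.%d.0/24\n", octet)
			break
		}
	}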
	I1025 10:21:10.551459  638584 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-683681" container
	I1025 10:21:10.551518  638584 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 10:21:10.575731  638584 cli_runner.go:164] Run: docker volume create embed-certs-683681 --label name.minikube.sigs.k8s.io=embed-certs-683681 --label created_by.minikube.sigs.k8s.io=true
	I1025 10:21:10.596450  638584 oci.go:103] Successfully created a docker volume embed-certs-683681
	I1025 10:21:10.596543  638584 cli_runner.go:164] Run: docker run --rm --name embed-certs-683681-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-683681 --entrypoint /usr/bin/test -v embed-certs-683681:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 10:21:11.043993  638584 oci.go:107] Successfully prepared a docker volume embed-certs-683681
	I1025 10:21:11.044039  638584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:11.044062  638584 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 10:21:11.044129  638584 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-683681:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1025 10:21:13.772552  624632 pod_ready.go:104] pod "coredns-5dd5756b68-k5644" is not "Ready", error: <nil>
	I1025 10:21:14.336599  624632 pod_ready.go:94] pod "coredns-5dd5756b68-k5644" is "Ready"
	I1025 10:21:14.336630  624632 pod_ready.go:86] duration metric: took 39.577109588s for pod "coredns-5dd5756b68-k5644" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.340650  624632 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.346235  624632 pod_ready.go:94] pod "etcd-old-k8s-version-714798" is "Ready"
	I1025 10:21:14.346269  624632 pod_ready.go:86] duration metric: took 5.588309ms for pod "etcd-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.349654  624632 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.355198  624632 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-714798" is "Ready"
	I1025 10:21:14.355230  624632 pod_ready.go:86] duration metric: took 5.550064ms for pod "kube-apiserver-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.359203  624632 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.515864  624632 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-714798" is "Ready"
	I1025 10:21:14.515908  624632 pod_ready.go:86] duration metric: took 156.674255ms for pod "kube-controller-manager-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:14.679941  624632 pod_ready.go:83] waiting for pod "kube-proxy-kqg7q" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.064359  624632 pod_ready.go:94] pod "kube-proxy-kqg7q" is "Ready"
	I1025 10:21:15.064395  624632 pod_ready.go:86] duration metric: took 384.425103ms for pod "kube-proxy-kqg7q" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.264420  624632 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.664469  624632 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-714798" is "Ready"
	I1025 10:21:15.664501  624632 pod_ready.go:86] duration metric: took 400.048856ms for pod "kube-scheduler-old-k8s-version-714798" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:15.664517  624632 pod_ready.go:40] duration metric: took 40.910543454s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:15.713277  624632 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1025 10:21:15.739862  624632 out.go:203] 
	W1025 10:21:15.783078  624632 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1025 10:21:15.791059  624632 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1025 10:21:15.796132  624632 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-714798" cluster and "default" namespace by default
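The "(minor skew: 6)" note reflects kubectl's version-skew policy: a client is only supported within one minor version of the API server, and 1.34 against 1.28 is six minors apart, hence the warning and the suggestion to use the bundled 'minikube kubectl --'. A stdlib-only Go sketch of the same check, with the version strings copied from the log:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component of a "major.minor.patch" version string.
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectlVersion, clusterVersion := "1.34.1", "1.28.0" // from the log above
	skew := minor(kubectlVersion) - minor(clusterVersion)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("minor skew: %d (kubectl supports at most 1)\n", skew) // prints 6
}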
	I1025 10:21:15.245915  636484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.794706474s)
	I1025 10:21:15.246013  636484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.780553475s)
	I1025 10:21:16.201960  636484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.577043142s)
	I1025 10:21:16.202175  636484 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.270743207s)
	I1025 10:21:16.202205  636484 api_server.go:72] duration metric: took 3.975127965s to wait for apiserver process to appear ...
	I1025 10:21:16.202212  636484 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:21:16.202233  636484 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1025 10:21:16.203931  636484 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-767846 addons enable metrics-server
	
	I1025 10:21:16.206179  636484 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1025 10:21:14.831620  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:16.832274  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	I1025 10:21:16.207469  636484 addons.go:514] duration metric: took 3.980316596s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1025 10:21:16.208161  636484 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:21:16.208186  636484 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:21:16.702507  636484 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1025 10:21:16.707281  636484 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1025 10:21:16.708497  636484 api_server.go:141] control plane version: v1.34.1
	I1025 10:21:16.708529  636484 api_server.go:131] duration metric: took 506.309184ms to wait for apiserver health ...
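The 500 above is expected mid-bootstrap: every check reports ok except the rbac/bootstrap-roles post-start hook, which flips to ok once the default RBAC roles have been reconciled, so the client simply re-polls until /healthz returns 200 (roughly 500ms later in this run). A minimal sketch of such a poll loop, assuming a self-signed apiserver certificate; waitHealthz is an illustrative name, not minikube's actual api_server.go code:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls an apiserver /healthz endpoint until it returns 200 or the
// timeout elapses, mirroring the retry visible in the log.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver cert is signed by the cluster CA, which this host does
		// not trust, so the sketch skips verification; minikube instead checks
		// the status code and body through its own configured client.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered 200: the control plane is serving
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen above
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	// Port 8444 because this profile runs the apiserver on a non-default port.
	if err := waitHealthz("https://192.168.103.2:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}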
	I1025 10:21:16.708542  636484 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:21:16.712747  636484 system_pods.go:59] 8 kube-system pods found
	I1025 10:21:16.712806  636484 system_pods.go:61] "coredns-66bc5c9577-rznxv" [d7eae20c-8d39-4486-ab11-13675911180f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:16.712819  636484 system_pods.go:61] "etcd-default-k8s-diff-port-767846" [7612c238-ca0d-458d-a901-e96167590fc2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:21:16.712835  636484 system_pods.go:61] "kindnet-vcqs2" [e41fd0fd-97c4-44ef-a645-cf0136340098] Running
	I1025 10:21:16.712845  636484 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-767846" [6eaa12a1-d4b6-4e96-81ed-18662e83034c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:21:16.712859  636484 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-767846" [7ffb55c8-db2f-4807-ace7-044a0c281f62] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:21:16.712874  636484 system_pods.go:61] "kube-proxy-cvm5c" [42278e98-5278-4efa-b484-ec73c16fc851] Running
	I1025 10:21:16.712885  636484 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-767846" [934d0b38-85ad-4c54-9e94-970177aa8cf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:21:16.712924  636484 system_pods.go:61] "storage-provisioner" [06a917da-eaa2-4b50-8c56-31a0ca7d14e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:16.712936  636484 system_pods.go:74] duration metric: took 4.383599ms to wait for pod list to return data ...
	I1025 10:21:16.712948  636484 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:21:16.715673  636484 default_sa.go:45] found service account: "default"
	I1025 10:21:16.715694  636484 default_sa.go:55] duration metric: took 2.737037ms for default service account to be created ...
	I1025 10:21:16.715704  636484 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:21:16.718943  636484 system_pods.go:86] 8 kube-system pods found
	I1025 10:21:16.718978  636484 system_pods.go:89] "coredns-66bc5c9577-rznxv" [d7eae20c-8d39-4486-ab11-13675911180f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:16.718990  636484 system_pods.go:89] "etcd-default-k8s-diff-port-767846" [7612c238-ca0d-458d-a901-e96167590fc2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:21:16.718997  636484 system_pods.go:89] "kindnet-vcqs2" [e41fd0fd-97c4-44ef-a645-cf0136340098] Running
	I1025 10:21:16.719005  636484 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-767846" [6eaa12a1-d4b6-4e96-81ed-18662e83034c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:21:16.719014  636484 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-767846" [7ffb55c8-db2f-4807-ace7-044a0c281f62] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:21:16.719034  636484 system_pods.go:89] "kube-proxy-cvm5c" [42278e98-5278-4efa-b484-ec73c16fc851] Running
	I1025 10:21:16.719042  636484 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-767846" [934d0b38-85ad-4c54-9e94-970177aa8cf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:21:16.719049  636484 system_pods.go:89] "storage-provisioner" [06a917da-eaa2-4b50-8c56-31a0ca7d14e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:16.719059  636484 system_pods.go:126] duration metric: took 3.347724ms to wait for k8s-apps to be running ...
	I1025 10:21:16.719070  636484 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:21:16.719120  636484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:21:16.733907  636484 system_svc.go:56] duration metric: took 14.825705ms WaitForService to wait for kubelet
	I1025 10:21:16.733943  636484 kubeadm.go:586] duration metric: took 4.506864504s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:21:16.733968  636484 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:21:16.737241  636484 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 10:21:16.737269  636484 node_conditions.go:123] node cpu capacity is 8
	I1025 10:21:16.737284  636484 node_conditions.go:105] duration metric: took 3.310515ms to run NodePressure ...
	I1025 10:21:16.737296  636484 start.go:241] waiting for startup goroutines ...
	I1025 10:21:16.737306  636484 start.go:246] waiting for cluster config update ...
	I1025 10:21:16.737329  636484 start.go:255] writing updated cluster config ...
	I1025 10:21:16.737611  636484 ssh_runner.go:195] Run: rm -f paused
	I1025 10:21:16.742069  636484 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:16.748801  636484 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rznxv" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:21:18.754620  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:16.111649  638584 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-683681:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.067461823s)
	I1025 10:21:16.111690  638584 kic.go:203] duration metric: took 5.067622848s to extract preloaded images to volume ...
	W1025 10:21:16.111819  638584 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 10:21:16.111866  638584 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 10:21:16.111917  638584 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 10:21:16.213690  638584 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-683681 --name embed-certs-683681 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-683681 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-683681 --network embed-certs-683681 --ip 192.168.94.2 --volume embed-certs-683681:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 10:21:16.572477  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Running}}
	I1025 10:21:16.594243  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:16.615558  638584 cli_runner.go:164] Run: docker exec embed-certs-683681 stat /var/lib/dpkg/alternatives/iptables
	I1025 10:21:16.666536  638584 oci.go:144] the created container "embed-certs-683681" has a running status.
	I1025 10:21:16.666576  638584 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa...
	I1025 10:21:16.809984  638584 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 10:21:16.847757  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:16.871585  638584 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 10:21:16.871610  638584 kic_runner.go:114] Args: [docker exec --privileged embed-certs-683681 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 10:21:16.923128  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:16.943365  638584 machine.go:93] provisionDockerMachine start ...
	I1025 10:21:16.943479  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:16.966341  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:16.966647  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:16.966668  638584 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:21:16.967537  638584 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56448->127.0.0.1:33128: read: connection reset by peer
	I1025 10:21:20.116967  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-683681
	
	I1025 10:21:20.117014  638584 ubuntu.go:182] provisioning hostname "embed-certs-683681"
	I1025 10:21:20.117084  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:20.137778  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:20.138008  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:20.138021  638584 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-683681 && echo "embed-certs-683681" | sudo tee /etc/hostname
	W1025 10:21:19.333601  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:21.831601  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:20.755645  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:22.755896  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:20.296939  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-683681
	
	I1025 10:21:20.297025  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:20.319104  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:20.319456  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:20.319479  638584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-683681' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-683681/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-683681' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:21:20.480669  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:21:20.480704  638584 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-321838/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-321838/.minikube}
	I1025 10:21:20.480727  638584 ubuntu.go:190] setting up certificates
	I1025 10:21:20.480741  638584 provision.go:84] configureAuth start
	I1025 10:21:20.480822  638584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-683681
	I1025 10:21:20.505092  638584 provision.go:143] copyHostCerts
	I1025 10:21:20.505168  638584 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem, removing ...
	I1025 10:21:20.505184  638584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem
	I1025 10:21:20.505274  638584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem (1679 bytes)
	I1025 10:21:20.505416  638584 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem, removing ...
	I1025 10:21:20.505430  638584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem
	I1025 10:21:20.505476  638584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem (1078 bytes)
	I1025 10:21:20.505561  638584 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem, removing ...
	I1025 10:21:20.505572  638584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem
	I1025 10:21:20.505630  638584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem (1123 bytes)
	I1025 10:21:20.505706  638584 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem org=jenkins.embed-certs-683681 san=[127.0.0.1 192.168.94.2 embed-certs-683681 localhost minikube]
	I1025 10:21:20.998585  638584 provision.go:177] copyRemoteCerts
	I1025 10:21:20.998661  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:21:20.998717  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.022129  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.137465  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 10:21:21.166388  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:21:21.193168  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:21:21.218286  638584 provision.go:87] duration metric: took 737.524136ms to configureAuth
	I1025 10:21:21.218330  638584 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:21:21.218553  638584 config.go:182] Loaded profile config "embed-certs-683681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:21.218676  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.245915  638584 main.go:141] libmachine: Using SSH client type: native
	I1025 10:21:21.246236  638584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1025 10:21:21.246262  638584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:21:21.569413  638584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:21:21.569443  638584 machine.go:96] duration metric: took 4.626049853s to provisionDockerMachine
	I1025 10:21:21.569456  638584 client.go:171] duration metric: took 11.165417694s to LocalClient.Create
	I1025 10:21:21.569475  638584 start.go:167] duration metric: took 11.165474816s to libmachine.API.Create "embed-certs-683681"
	I1025 10:21:21.569486  638584 start.go:293] postStartSetup for "embed-certs-683681" (driver="docker")
	I1025 10:21:21.569498  638584 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:21:21.569575  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:21:21.569622  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.594722  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.713328  638584 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:21:21.718538  638584 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:21:21.718572  638584 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:21:21.718589  638584 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/addons for local assets ...
	I1025 10:21:21.718659  638584 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/files for local assets ...
	I1025 10:21:21.718787  638584 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem -> 3254552.pem in /etc/ssl/certs
	I1025 10:21:21.718927  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:21:21.729097  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:21.759300  638584 start.go:296] duration metric: took 189.796063ms for postStartSetup
	I1025 10:21:21.759764  638584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-683681
	I1025 10:21:21.783751  638584 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/config.json ...
	I1025 10:21:21.784070  638584 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:21:21.784113  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.807921  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.920186  638584 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:21:21.927662  638584 start.go:128] duration metric: took 11.525830646s to createHost
	I1025 10:21:21.927699  638584 start.go:83] releasing machines lock for "embed-certs-683681", held for 11.526002458s
	I1025 10:21:21.927785  638584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-683681
	I1025 10:21:21.954049  638584 ssh_runner.go:195] Run: cat /version.json
	I1025 10:21:21.954096  638584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:21:21.954115  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.954188  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:21.978409  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:21.979872  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:22.092988  638584 ssh_runner.go:195] Run: systemctl --version
	I1025 10:21:22.175966  638584 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:21:22.229838  638584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:21:22.236975  638584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:21:22.237063  638584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:21:22.280942  638584 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 10:21:22.280974  638584 start.go:495] detecting cgroup driver to use...
	I1025 10:21:22.281010  638584 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 10:21:22.281075  638584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:21:22.306839  638584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:21:22.324489  638584 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:21:22.324560  638584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:21:22.350902  638584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:21:22.380086  638584 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:21:22.506896  638584 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:21:22.639498  638584 docker.go:234] disabling docker service ...
	I1025 10:21:22.639578  638584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:21:22.669198  638584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:21:22.689583  638584 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:21:22.814437  638584 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:21:22.917355  638584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:21:22.933471  638584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:21:22.951220  638584 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:21:22.951289  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.964021  638584 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 10:21:22.964092  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.974888  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.985640  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:22.996280  638584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:21:23.008692  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:23.019742  638584 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:23.036857  638584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:21:23.048489  638584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:21:23.060801  638584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:21:23.072496  638584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:23.170641  638584 ssh_runner.go:195] Run: sudo systemctl restart crio
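The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10.1, switch cgroup_manager to systemd, move conmon into the pod cgroup, and open unprivileged low ports via default_sysctls, before the daemon-reload and CRI-O restart. A Go sketch of the two key rewrites applied to an in-memory string (minikube itself does the equivalent with sed over SSH):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stand-in for /etc/crio/crio.conf.d/02-crio.conf before the edits.
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"cgroupfs\"\n"

	// Pin the pause image, as the first sed in the log does.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Switch CRI-O to the systemd cgroup driver, as the second sed does.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)

	fmt.Print(conf)
}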
	I1025 10:21:24.036513  638584 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:21:24.036615  638584 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:21:24.042080  638584 start.go:563] Will wait 60s for crictl version
	I1025 10:21:24.042156  638584 ssh_runner.go:195] Run: which crictl
	I1025 10:21:24.047422  638584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:21:24.082362  638584 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:21:24.082466  638584 ssh_runner.go:195] Run: crio --version
	I1025 10:21:24.126861  638584 ssh_runner.go:195] Run: crio --version
	I1025 10:21:24.175837  638584 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:21:24.178134  638584 cli_runner.go:164] Run: docker network inspect embed-certs-683681 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:21:24.201413  638584 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1025 10:21:24.207278  638584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:21:24.223512  638584 kubeadm.go:883] updating cluster {Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:21:24.223683  638584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:21:24.223762  638584 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:24.272966  638584 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:24.272993  638584 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:21:24.273051  638584 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:21:24.308934  638584 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:21:24.308965  638584 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:21:24.308975  638584 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1025 10:21:24.309097  638584 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-683681 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:21:24.309184  638584 ssh_runner.go:195] Run: crio config
	I1025 10:21:24.382243  638584 cni.go:84] Creating CNI manager for ""
	I1025 10:21:24.382273  638584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:21:24.382297  638584 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:21:24.382337  638584 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-683681 NodeName:embed-certs-683681 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:21:24.382524  638584 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-683681"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:21:24.382607  638584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:21:24.394268  638584 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:21:24.394387  638584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:21:24.406618  638584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1025 10:21:24.425969  638584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:21:24.449251  638584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
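The 2214-byte kubeadm.yaml.new written above is the four-document manifest dumped earlier: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by '---'. A stdlib Go sketch that splits such a stream and lists the kinds, just to make that structure explicit:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Stand-in for the generated kubeadm.yaml; only the kind lines matter here.
	manifest := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\n" +
		"kind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
	for i, doc := range strings.Split(manifest, "\n---\n") {
		kind := strings.TrimSpace(strings.TrimPrefix(doc, "kind: "))
		fmt.Printf("document %d: %s\n", i+1, kind)
	}
}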
	I1025 10:21:24.469582  638584 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:21:24.474973  638584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:21:24.490157  638584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:24.584608  638584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:21:24.614181  638584 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681 for IP: 192.168.94.2
	I1025 10:21:24.614210  638584 certs.go:195] generating shared ca certs ...
	I1025 10:21:24.614233  638584 certs.go:227] acquiring lock for ca certs: {Name:mke559c04eea9c265cb2a7beab0da125bee52db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.614424  638584 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key
	I1025 10:21:24.614484  638584 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key
	I1025 10:21:24.614496  638584 certs.go:257] generating profile certs ...
	I1025 10:21:24.614561  638584 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.key
	I1025 10:21:24.614588  638584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.crt with IP's: []
	I1025 10:21:24.860136  638584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.crt ...
	I1025 10:21:24.860185  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.crt: {Name:mk13866e786fa05bf2537b78a891e332bde8c0bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.860411  638584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.key ...
	I1025 10:21:24.860433  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.key: {Name:mk1337a45bd58216e46a47cf6f99440d10fa8b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.860559  638584 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81
	I1025 10:21:24.860582  638584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1025 10:21:24.949254  638584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81 ...
	I1025 10:21:24.949286  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81: {Name:mkc51a7d58b8866a38120d27081d78fd5d68e786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.949518  638584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81 ...
	I1025 10:21:24.949547  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81: {Name:mk94d386c4ce3ce7255b450634f934fa53890845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:24.949697  638584 certs.go:382] copying /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt.b6974f81 -> /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt
	I1025 10:21:24.949820  638584 certs.go:386] copying /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81 -> /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key
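The apiserver certificate generated above carries the SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]; 10.96.0.1 is the first usable address of the 10.96.0.0/12 service CIDR, where the in-cluster kubernetes Service answers, which is why it must appear in the cert. A small Go sketch of that first-address computation:

package main

import (
	"fmt"
	"log"
	"net"
)

// firstServiceIP returns the first usable address of a service CIDR, which is
// the ClusterIP assigned to the in-cluster kubernetes Service.
func firstServiceIP(cidr string) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ip := ipnet.IP.To4()
	if ip == nil {
		return nil, fmt.Errorf("this sketch handles IPv4 only")
	}
	first := make(net.IP, len(ip))
	copy(first, ip)
	first[3]++ // network address plus one
	return first, nil
}

func main() {
	ip, err := firstServiceIP("10.96.0.0/12") // ServiceCIDR from the cluster config above
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ip) // 10.96.0.1, matching the cert SANs
}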
	I1025 10:21:24.949908  638584 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key
	I1025 10:21:24.949937  638584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt with IP's: []
	W1025 10:21:24.331982  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:26.831359  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:25.254917  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:27.754831  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:25.383221  638584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt ...
	I1025 10:21:25.383272  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt: {Name:mk46cb1967cb21d5d9aafce0c0335add4612cf00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:25.383535  638584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key ...
	I1025 10:21:25.383560  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key: {Name:mkda2e4f8c6847061b7c83d0748f50b193d241a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:25.383814  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem (1338 bytes)
	W1025 10:21:25.383870  638584 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455_empty.pem, impossibly tiny 0 bytes
	I1025 10:21:25.383887  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:21:25.383917  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:21:25.383941  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:21:25.383962  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem (1679 bytes)
	I1025 10:21:25.384004  638584 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:21:25.384676  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:21:25.406810  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 10:21:25.429770  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:21:25.451189  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:21:25.475734  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1025 10:21:25.500538  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 10:21:25.522356  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:21:25.545290  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:21:25.567130  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:21:25.591445  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem --> /usr/share/ca-certificates/325455.pem (1338 bytes)
	I1025 10:21:25.616100  638584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /usr/share/ca-certificates/3254552.pem (1708 bytes)
	I1025 10:21:25.635723  638584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:21:25.650419  638584 ssh_runner.go:195] Run: openssl version
	I1025 10:21:25.657438  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:21:25.667296  638584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:25.671566  638584 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:32 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:25.671639  638584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:21:25.708223  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:21:25.718734  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/325455.pem && ln -fs /usr/share/ca-certificates/325455.pem /etc/ssl/certs/325455.pem"
	I1025 10:21:25.728930  638584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/325455.pem
	I1025 10:21:25.733604  638584 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:38 /usr/share/ca-certificates/325455.pem
	I1025 10:21:25.733672  638584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/325455.pem
	I1025 10:21:25.770496  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/325455.pem /etc/ssl/certs/51391683.0"
	I1025 10:21:25.780237  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3254552.pem && ln -fs /usr/share/ca-certificates/3254552.pem /etc/ssl/certs/3254552.pem"
	I1025 10:21:25.790312  638584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3254552.pem
	I1025 10:21:25.794835  638584 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:38 /usr/share/ca-certificates/3254552.pem
	I1025 10:21:25.794898  638584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3254552.pem
	I1025 10:21:25.832583  638584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3254552.pem /etc/ssl/certs/3ec20f2e.0"
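The openssl/ln pairs above install each CA into OpenSSL's trust directory: 'openssl x509 -hash -noout' prints the subject-name hash, and the certificate is symlinked as /etc/ssl/certs/<hash>.0 (b5213941.0, 51391683.0, and 3ec20f2e.0 in this run) so OpenSSL can locate it by hash lookup. A Go sketch of deriving that link name; it assumes openssl is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hashLink derives the "<hash>.0" filename OpenSSL expects under
// /etc/ssl/certs by running the same openssl invocation as the log.
func hashLink(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	link, err := hashLink("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	fmt.Println(link) // b5213941.0 in this run
}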
	I1025 10:21:25.842614  638584 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:21:25.846872  638584 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 10:21:25.846930  638584 kubeadm.go:400] StartCluster: {Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:21:25.847005  638584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:21:25.847068  638584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:21:25.875826  638584 cri.go:89] found id: ""
	I1025 10:21:25.875903  638584 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:21:25.885163  638584 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:21:25.894136  638584 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 10:21:25.894192  638584 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:21:25.903706  638584 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 10:21:25.903732  638584 kubeadm.go:157] found existing configuration files:
	
	I1025 10:21:25.903784  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 10:21:25.913301  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 10:21:25.913384  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:21:25.923343  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 10:21:25.932490  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 10:21:25.932550  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:21:25.941477  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 10:21:25.950962  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 10:21:25.951028  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:21:25.959533  638584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 10:21:25.968524  638584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 10:21:25.968595  638584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
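The four grep/rm pairs above follow one pattern: a leftover kubeconfig is kept only if it already references the expected control-plane endpoint; otherwise it is deleted so `kubeadm init` can regenerate it. A sketch of that check (illustrative only, not the actual kubeadm.go code):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// Keep each kubeconfig only if it references the expected endpoint,
// mirroring the grep-then-rm sequence in the log above.
func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			os.Remove(f) // stale or missing: let kubeadm regenerate it
			fmt.Println("removed", f)
		}
	}
}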
	I1025 10:21:25.977380  638584 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 10:21:26.045566  638584 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 10:21:26.120440  638584 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1025 10:21:29.331743  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:31.831906  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:30.254936  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:32.256411  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:36.665150  638584 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 10:21:36.665238  638584 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 10:21:36.665366  638584 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 10:21:36.665424  638584 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 10:21:36.665455  638584 kubeadm.go:318] OS: Linux
	I1025 10:21:36.665528  638584 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 10:21:36.665640  638584 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 10:21:36.665711  638584 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 10:21:36.665755  638584 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 10:21:36.665836  638584 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 10:21:36.665906  638584 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 10:21:36.665989  638584 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 10:21:36.666061  638584 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 10:21:36.666164  638584 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 10:21:36.666287  638584 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 10:21:36.666443  638584 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 10:21:36.666505  638584 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 10:21:36.668101  638584 out.go:252]   - Generating certificates and keys ...
	I1025 10:21:36.668178  638584 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 10:21:36.668239  638584 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 10:21:36.668297  638584 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 10:21:36.668408  638584 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 10:21:36.668487  638584 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 10:21:36.668570  638584 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 10:21:36.668632  638584 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 10:21:36.669282  638584 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-683681 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1025 10:21:36.669368  638584 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 10:21:36.669522  638584 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-683681 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1025 10:21:36.669602  638584 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 10:21:36.669681  638584 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 10:21:36.669732  638584 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 10:21:36.669795  638584 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 10:21:36.669856  638584 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 10:21:36.669922  638584 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 10:21:36.669975  638584 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 10:21:36.670054  638584 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 10:21:36.670110  638584 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 10:21:36.670198  638584 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 10:21:36.670268  638584 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 10:21:36.673336  638584 out.go:252]   - Booting up control plane ...
	I1025 10:21:36.673471  638584 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 10:21:36.673585  638584 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 10:21:36.673666  638584 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 10:21:36.673811  638584 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 10:21:36.673918  638584 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 10:21:36.674052  638584 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 10:21:36.674150  638584 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 10:21:36.674197  638584 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 10:21:36.674448  638584 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 10:21:36.674610  638584 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 10:21:36.674735  638584 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.921842ms
	I1025 10:21:36.674869  638584 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 10:21:36.674985  638584 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1025 10:21:36.675113  638584 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 10:21:36.675225  638584 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 10:21:36.675373  638584 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.848539291s
	I1025 10:21:36.675485  638584 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.099917517s
	I1025 10:21:36.675576  638584 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.501482903s
	I1025 10:21:36.675749  638584 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 10:21:36.675902  638584 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 10:21:36.675992  638584 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 10:21:36.676186  638584 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-683681 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 10:21:36.676270  638584 kubeadm.go:318] [bootstrap-token] Using token: gh3e3n.vi8ppuvnf3ix9l58
	I1025 10:21:36.678455  638584 out.go:252]   - Configuring RBAC rules ...
	I1025 10:21:36.678655  638584 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 10:21:36.678741  638584 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 10:21:36.678915  638584 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 10:21:36.679094  638584 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 10:21:36.679206  638584 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 10:21:36.679286  638584 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 10:21:36.679483  638584 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 10:21:36.679551  638584 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 10:21:36.679620  638584 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 10:21:36.679632  638584 kubeadm.go:318] 
	I1025 10:21:36.679721  638584 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 10:21:36.679732  638584 kubeadm.go:318] 
	I1025 10:21:36.679835  638584 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 10:21:36.679845  638584 kubeadm.go:318] 
	I1025 10:21:36.679882  638584 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 10:21:36.679977  638584 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 10:21:36.680061  638584 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 10:21:36.680070  638584 kubeadm.go:318] 
	I1025 10:21:36.680154  638584 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 10:21:36.680170  638584 kubeadm.go:318] 
	I1025 10:21:36.680221  638584 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 10:21:36.680229  638584 kubeadm.go:318] 
	I1025 10:21:36.680289  638584 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 10:21:36.680387  638584 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 10:21:36.680463  638584 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 10:21:36.680471  638584 kubeadm.go:318] 
	I1025 10:21:36.680563  638584 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 10:21:36.680661  638584 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 10:21:36.680670  638584 kubeadm.go:318] 
	I1025 10:21:36.680776  638584 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token gh3e3n.vi8ppuvnf3ix9l58 \
	I1025 10:21:36.680932  638584 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d220a0fdd8ffa4188e5a3a4b5b37c16072a735a52e5507fcdbcb4d38c461642f \
	I1025 10:21:36.680959  638584 kubeadm.go:318] 	--control-plane 
	I1025 10:21:36.680967  638584 kubeadm.go:318] 
	I1025 10:21:36.681062  638584 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 10:21:36.681073  638584 kubeadm.go:318] 
	I1025 10:21:36.681190  638584 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token gh3e3n.vi8ppuvnf3ix9l58 \
	I1025 10:21:36.681350  638584 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d220a0fdd8ffa4188e5a3a4b5b37c16072a735a52e5507fcdbcb4d38c461642f 
	I1025 10:21:36.681383  638584 cni.go:84] Creating CNI manager for ""
	I1025 10:21:36.681402  638584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:21:36.685048  638584 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1025 10:21:34.332728  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:36.832195  631515 pod_ready.go:104] pod "coredns-66bc5c9577-gtnvx" is not "Ready", error: <nil>
	W1025 10:21:34.756305  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:37.255124  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:36.686372  638584 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 10:21:36.691990  638584 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 10:21:36.692012  638584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 10:21:36.711248  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 10:21:36.950001  638584 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 10:21:36.950063  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:36.950140  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-683681 minikube.k8s.io/updated_at=2025_10_25T10_21_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689 minikube.k8s.io/name=embed-certs-683681 minikube.k8s.io/primary=true
	I1025 10:21:36.962716  638584 ops.go:34] apiserver oom_adj: -16
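The `oom_adj: -16` line comes from reading /proc/<pid>/oom_adj for the kube-apiserver process (see the `cat /proc/$(pgrep kube-apiserver)/oom_adj` run above); the kubelet lowers this score so the kernel OOM killer prefers other processes over the apiserver. A standalone sketch of the same probe (assumes Linux /proc and a single kube-apiserver process):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// Report the OOM score adjustment of kube-apiserver, as the log's
// "cat /proc/$(pgrep kube-apiserver)/oom_adj" does.
func main() {
	pid, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "no kube-apiserver process:", err)
		return
	}
	adj, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj))) // e.g. -16
}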
	I1025 10:21:37.040626  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:37.541457  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:38.041452  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:38.541265  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:39.041583  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:39.541553  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:40.041803  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:39.330926  631515 pod_ready.go:94] pod "coredns-66bc5c9577-gtnvx" is "Ready"
	I1025 10:21:39.330956  631515 pod_ready.go:86] duration metric: took 38.506063732s for pod "coredns-66bc5c9577-gtnvx" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.333923  631515 pod_ready.go:83] waiting for pod "etcd-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.338091  631515 pod_ready.go:94] pod "etcd-no-preload-899665" is "Ready"
	I1025 10:21:39.338119  631515 pod_ready.go:86] duration metric: took 4.169551ms for pod "etcd-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.340510  631515 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.344782  631515 pod_ready.go:94] pod "kube-apiserver-no-preload-899665" is "Ready"
	I1025 10:21:39.344808  631515 pod_ready.go:86] duration metric: took 4.267435ms for pod "kube-apiserver-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.346928  631515 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.527867  631515 pod_ready.go:94] pod "kube-controller-manager-no-preload-899665" is "Ready"
	I1025 10:21:39.527898  631515 pod_ready.go:86] duration metric: took 180.948376ms for pod "kube-controller-manager-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:39.728099  631515 pod_ready.go:83] waiting for pod "kube-proxy-fdthr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:40.129442  631515 pod_ready.go:94] pod "kube-proxy-fdthr" is "Ready"
	I1025 10:21:40.129471  631515 pod_ready.go:86] duration metric: took 401.343438ms for pod "kube-proxy-fdthr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:40.329196  631515 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:40.728428  631515 pod_ready.go:94] pod "kube-scheduler-no-preload-899665" is "Ready"
	I1025 10:21:40.728461  631515 pod_ready.go:86] duration metric: took 399.238728ms for pod "kube-scheduler-no-preload-899665" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:40.728477  631515 pod_ready.go:40] duration metric: took 39.908384057s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:40.776763  631515 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 10:21:40.778765  631515 out.go:179] * Done! kubectl is now configured to use "no-preload-899665" cluster and "default" namespace by default
	I1025 10:21:40.541552  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:41.041202  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:41.540928  638584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:21:41.626698  638584 kubeadm.go:1113] duration metric: took 4.676682024s to wait for elevateKubeSystemPrivileges
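The repeated `kubectl get sa default` calls above, roughly one every 500ms, are how minikube waits for the controller-manager to create the namespace's default ServiceAccount before it grants kube-system privileges. A generic version of that poll (a sketch shelling out to kubectl rather than using client-go):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Poll until the "default" ServiceAccount exists, i.e. the
// controller-manager has started populating the namespace.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
			"get", "sa", "default").Run()
		if err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}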
	I1025 10:21:41.626740  638584 kubeadm.go:402] duration metric: took 15.779813606s to StartCluster
	I1025 10:21:41.626763  638584 settings.go:142] acquiring lock: {Name:mkccf1ae03faaf532633e370e89b56d0fc3d2b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:41.626844  638584 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:21:41.628485  638584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:21:41.628738  638584 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:21:41.628758  638584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 10:21:41.628815  638584 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:21:41.628922  638584 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-683681"
	I1025 10:21:41.628947  638584 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-683681"
	I1025 10:21:41.628984  638584 host.go:66] Checking if "embed-certs-683681" exists ...
	I1025 10:21:41.628970  638584 addons.go:69] Setting default-storageclass=true in profile "embed-certs-683681"
	I1025 10:21:41.629014  638584 config.go:182] Loaded profile config "embed-certs-683681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:41.629033  638584 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-683681"
	I1025 10:21:41.629466  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:41.629530  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:41.632478  638584 out.go:179] * Verifying Kubernetes components...
	I1025 10:21:41.635235  638584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:21:41.654284  638584 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:21:41.655720  638584 addons.go:238] Setting addon default-storageclass=true in "embed-certs-683681"
	I1025 10:21:41.655762  638584 host.go:66] Checking if "embed-certs-683681" exists ...
	I1025 10:21:41.656106  638584 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:21:41.656203  638584 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:21:41.656228  638584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:21:41.656290  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:41.679823  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:41.684242  638584 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:21:41.684268  638584 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:21:41.684345  638584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:21:41.712034  638584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:21:41.726056  638584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 10:21:41.804301  638584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:21:41.809475  638584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:21:41.831472  638584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:21:41.912561  638584 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1025 10:21:42.139096  638584 node_ready.go:35] waiting up to 6m0s for node "embed-certs-683681" to be "Ready" ...
	I1025 10:21:42.145509  638584 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1025 10:21:39.755018  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:41.756413  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:42.146900  638584 addons.go:514] duration metric: took 518.085843ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 10:21:42.416647  638584 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-683681" context rescaled to 1 replicas
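The rescale noted above trims CoreDNS from kubeadm's default of two replicas down to one, which is enough for a single-node cluster. A client-go sketch of the same scale operation (the kubeconfig path is an assumption):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Rescale the kube-system/coredns deployment to a single replica,
// as the "rescaled to 1 replicas" log line above reports.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	scale, err := cs.AppsV1().Deployments("kube-system").
		GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := cs.AppsV1().Deployments("kube-system").
		UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}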
	W1025 10:21:44.142621  638584 node_ready.go:57] node "embed-certs-683681" has "Ready":"False" status (will retry)
	W1025 10:21:44.256001  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	W1025 10:21:46.755543  636484 pod_ready.go:104] pod "coredns-66bc5c9577-rznxv" is not "Ready", error: <nil>
	I1025 10:21:47.755253  636484 pod_ready.go:94] pod "coredns-66bc5c9577-rznxv" is "Ready"
	I1025 10:21:47.755285  636484 pod_ready.go:86] duration metric: took 31.006445495s for pod "coredns-66bc5c9577-rznxv" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.758305  636484 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.763202  636484 pod_ready.go:94] pod "etcd-default-k8s-diff-port-767846" is "Ready"
	I1025 10:21:47.763230  636484 pod_ready.go:86] duration metric: took 4.871359ms for pod "etcd-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.765533  636484 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.769981  636484 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-767846" is "Ready"
	I1025 10:21:47.770085  636484 pod_ready.go:86] duration metric: took 4.518205ms for pod "kube-apiserver-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.772484  636484 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:47.952605  636484 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-767846" is "Ready"
	I1025 10:21:47.952636  636484 pod_ready.go:86] duration metric: took 180.129601ms for pod "kube-controller-manager-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:48.153608  636484 pod_ready.go:83] waiting for pod "kube-proxy-cvm5c" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:48.552560  636484 pod_ready.go:94] pod "kube-proxy-cvm5c" is "Ready"
	I1025 10:21:48.552591  636484 pod_ready.go:86] duration metric: took 398.954024ms for pod "kube-proxy-cvm5c" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:48.753044  636484 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:49.152785  636484 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-767846" is "Ready"
	I1025 10:21:49.152816  636484 pod_ready.go:86] duration metric: took 399.744601ms for pod "kube-scheduler-default-k8s-diff-port-767846" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:49.152828  636484 pod_ready.go:40] duration metric: took 32.410721068s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:49.201278  636484 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 10:21:49.203247  636484 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-767846" cluster and "default" namespace by default
	W1025 10:21:46.143197  638584 node_ready.go:57] node "embed-certs-683681" has "Ready":"False" status (will retry)
	W1025 10:21:48.642439  638584 node_ready.go:57] node "embed-certs-683681" has "Ready":"False" status (will retry)
	W1025 10:21:50.642613  638584 node_ready.go:57] node "embed-certs-683681" has "Ready":"False" status (will retry)
	I1025 10:21:52.643144  638584 node_ready.go:49] node "embed-certs-683681" is "Ready"
	I1025 10:21:52.643184  638584 node_ready.go:38] duration metric: took 10.504034315s for node "embed-certs-683681" to be "Ready" ...
	I1025 10:21:52.643202  638584 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:21:52.643262  638584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:21:52.659492  638584 api_server.go:72] duration metric: took 11.030720868s to wait for apiserver process to appear ...
	I1025 10:21:52.659528  638584 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:21:52.659553  638584 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 10:21:52.666017  638584 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1025 10:21:52.667256  638584 api_server.go:141] control plane version: v1.34.1
	I1025 10:21:52.667289  638584 api_server.go:131] duration metric: took 7.752823ms to wait for apiserver health ...
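The healthz wait above is a plain HTTPS poll: keep GETting /healthz until the apiserver answers 200. A minimal Go version (sketch only; InsecureSkipVerify stands in for loading the cluster CA, which the real check would do):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// Poll the apiserver's /healthz until it returns HTTP 200,
// as the log does against https://192.168.94.2:8443/healthz.
func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.94.2:8443/healthz")
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("apiserver healthy")
			return
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(500 * time.Millisecond)
	}
}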
	I1025 10:21:52.667300  638584 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:21:52.670860  638584 system_pods.go:59] 8 kube-system pods found
	I1025 10:21:52.670907  638584 system_pods.go:61] "coredns-66bc5c9577-545dp" [a2709fe3-a1d1-4394-8cf7-3776dc8fd318] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:52.670917  638584 system_pods.go:61] "etcd-embed-certs-683681" [efd93203-1fbf-495a-8d60-73421d0a6d9c] Running
	I1025 10:21:52.670928  638584 system_pods.go:61] "kindnet-5zktx" [3398616a-6eb4-432e-bb84-ae1f166c7e71] Running
	I1025 10:21:52.670934  638584 system_pods.go:61] "kube-apiserver-embed-certs-683681" [0ff30802-f9d1-465d-8d94-769528b99497] Running
	I1025 10:21:52.670944  638584 system_pods.go:61] "kube-controller-manager-embed-certs-683681" [b794e426-ac8e-48d6-b9e3-38998ed4f272] Running
	I1025 10:21:52.670949  638584 system_pods.go:61] "kube-proxy-dbks6" [551b9ca3-e53d-4be0-bcb5-b96d76be6c14] Running
	I1025 10:21:52.670958  638584 system_pods.go:61] "kube-scheduler-embed-certs-683681" [97ed6526-e7b5-4086-a584-7e0cb301fb30] Running
	I1025 10:21:52.670966  638584 system_pods.go:61] "storage-provisioner" [42d81686-dd78-4ed1-9ead-cbcdca1d14ce] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:52.670977  638584 system_pods.go:74] duration metric: took 3.669298ms to wait for pod list to return data ...
	I1025 10:21:52.670994  638584 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:21:52.673975  638584 default_sa.go:45] found service account: "default"
	I1025 10:21:52.674010  638584 default_sa.go:55] duration metric: took 3.005154ms for default service account to be created ...
	I1025 10:21:52.674024  638584 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:21:52.677130  638584 system_pods.go:86] 8 kube-system pods found
	I1025 10:21:52.677169  638584 system_pods.go:89] "coredns-66bc5c9577-545dp" [a2709fe3-a1d1-4394-8cf7-3776dc8fd318] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:52.677181  638584 system_pods.go:89] "etcd-embed-certs-683681" [efd93203-1fbf-495a-8d60-73421d0a6d9c] Running
	I1025 10:21:52.677191  638584 system_pods.go:89] "kindnet-5zktx" [3398616a-6eb4-432e-bb84-ae1f166c7e71] Running
	I1025 10:21:52.677195  638584 system_pods.go:89] "kube-apiserver-embed-certs-683681" [0ff30802-f9d1-465d-8d94-769528b99497] Running
	I1025 10:21:52.677201  638584 system_pods.go:89] "kube-controller-manager-embed-certs-683681" [b794e426-ac8e-48d6-b9e3-38998ed4f272] Running
	I1025 10:21:52.677206  638584 system_pods.go:89] "kube-proxy-dbks6" [551b9ca3-e53d-4be0-bcb5-b96d76be6c14] Running
	I1025 10:21:52.677212  638584 system_pods.go:89] "kube-scheduler-embed-certs-683681" [97ed6526-e7b5-4086-a584-7e0cb301fb30] Running
	I1025 10:21:52.677223  638584 system_pods.go:89] "storage-provisioner" [42d81686-dd78-4ed1-9ead-cbcdca1d14ce] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:52.677255  638584 retry.go:31] will retry after 207.699186ms: missing components: kube-dns
	I1025 10:21:52.889747  638584 system_pods.go:86] 8 kube-system pods found
	I1025 10:21:52.889810  638584 system_pods.go:89] "coredns-66bc5c9577-545dp" [a2709fe3-a1d1-4394-8cf7-3776dc8fd318] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:52.889819  638584 system_pods.go:89] "etcd-embed-certs-683681" [efd93203-1fbf-495a-8d60-73421d0a6d9c] Running
	I1025 10:21:52.889834  638584 system_pods.go:89] "kindnet-5zktx" [3398616a-6eb4-432e-bb84-ae1f166c7e71] Running
	I1025 10:21:52.889839  638584 system_pods.go:89] "kube-apiserver-embed-certs-683681" [0ff30802-f9d1-465d-8d94-769528b99497] Running
	I1025 10:21:52.889854  638584 system_pods.go:89] "kube-controller-manager-embed-certs-683681" [b794e426-ac8e-48d6-b9e3-38998ed4f272] Running
	I1025 10:21:52.889859  638584 system_pods.go:89] "kube-proxy-dbks6" [551b9ca3-e53d-4be0-bcb5-b96d76be6c14] Running
	I1025 10:21:52.889867  638584 system_pods.go:89] "kube-scheduler-embed-certs-683681" [97ed6526-e7b5-4086-a584-7e0cb301fb30] Running
	I1025 10:21:52.889879  638584 system_pods.go:89] "storage-provisioner" [42d81686-dd78-4ed1-9ead-cbcdca1d14ce] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:21:52.889906  638584 retry.go:31] will retry after 319.387436ms: missing components: kube-dns
	I1025 10:21:53.212708  638584 system_pods.go:86] 8 kube-system pods found
	I1025 10:21:53.212741  638584 system_pods.go:89] "coredns-66bc5c9577-545dp" [a2709fe3-a1d1-4394-8cf7-3776dc8fd318] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:21:53.212748  638584 system_pods.go:89] "etcd-embed-certs-683681" [efd93203-1fbf-495a-8d60-73421d0a6d9c] Running
	I1025 10:21:53.212753  638584 system_pods.go:89] "kindnet-5zktx" [3398616a-6eb4-432e-bb84-ae1f166c7e71] Running
	I1025 10:21:53.212757  638584 system_pods.go:89] "kube-apiserver-embed-certs-683681" [0ff30802-f9d1-465d-8d94-769528b99497] Running
	I1025 10:21:53.212762  638584 system_pods.go:89] "kube-controller-manager-embed-certs-683681" [b794e426-ac8e-48d6-b9e3-38998ed4f272] Running
	I1025 10:21:53.212765  638584 system_pods.go:89] "kube-proxy-dbks6" [551b9ca3-e53d-4be0-bcb5-b96d76be6c14] Running
	I1025 10:21:53.212769  638584 system_pods.go:89] "kube-scheduler-embed-certs-683681" [97ed6526-e7b5-4086-a584-7e0cb301fb30] Running
	I1025 10:21:53.212772  638584 system_pods.go:89] "storage-provisioner" [42d81686-dd78-4ed1-9ead-cbcdca1d14ce] Running
	I1025 10:21:53.212781  638584 system_pods.go:126] duration metric: took 538.748598ms to wait for k8s-apps to be running ...
	I1025 10:21:53.212792  638584 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:21:53.212838  638584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:21:53.227721  638584 system_svc.go:56] duration metric: took 14.91387ms WaitForService to wait for kubelet
	I1025 10:21:53.227757  638584 kubeadm.go:586] duration metric: took 11.598992037s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:21:53.227783  638584 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:21:53.231073  638584 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 10:21:53.231102  638584 node_conditions.go:123] node cpu capacity is 8
	I1025 10:21:53.231116  638584 node_conditions.go:105] duration metric: took 3.327789ms to run NodePressure ...
	I1025 10:21:53.231127  638584 start.go:241] waiting for startup goroutines ...
	I1025 10:21:53.231134  638584 start.go:246] waiting for cluster config update ...
	I1025 10:21:53.231145  638584 start.go:255] writing updated cluster config ...
	I1025 10:21:53.231433  638584 ssh_runner.go:195] Run: rm -f paused
	I1025 10:21:53.235996  638584 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:53.239628  638584 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-545dp" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.246519  638584 pod_ready.go:94] pod "coredns-66bc5c9577-545dp" is "Ready"
	I1025 10:21:54.246556  638584 pod_ready.go:86] duration metric: took 1.006903697s for pod "coredns-66bc5c9577-545dp" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.249657  638584 pod_ready.go:83] waiting for pod "etcd-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.254284  638584 pod_ready.go:94] pod "etcd-embed-certs-683681" is "Ready"
	I1025 10:21:54.254351  638584 pod_ready.go:86] duration metric: took 4.629709ms for pod "etcd-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.256768  638584 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.261130  638584 pod_ready.go:94] pod "kube-apiserver-embed-certs-683681" is "Ready"
	I1025 10:21:54.261157  638584 pod_ready.go:86] duration metric: took 4.363563ms for pod "kube-apiserver-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.263224  638584 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.443581  638584 pod_ready.go:94] pod "kube-controller-manager-embed-certs-683681" is "Ready"
	I1025 10:21:54.443610  638584 pod_ready.go:86] duration metric: took 180.36054ms for pod "kube-controller-manager-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:54.644082  638584 pod_ready.go:83] waiting for pod "kube-proxy-dbks6" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:55.044226  638584 pod_ready.go:94] pod "kube-proxy-dbks6" is "Ready"
	I1025 10:21:55.044259  638584 pod_ready.go:86] duration metric: took 400.15124ms for pod "kube-proxy-dbks6" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:55.243900  638584 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:55.643886  638584 pod_ready.go:94] pod "kube-scheduler-embed-certs-683681" is "Ready"
	I1025 10:21:55.643919  638584 pod_ready.go:86] duration metric: took 399.992242ms for pod "kube-scheduler-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:21:55.643935  638584 pod_ready.go:40] duration metric: took 2.407895178s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:21:55.697477  638584 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 10:21:55.699399  638584 out.go:179] * Done! kubectl is now configured to use "embed-certs-683681" cluster and "default" namespace by default
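All of the pod_ready.go waits in this log reduce to the same check: fetch the pod and test whether its Ready condition is True. A client-go sketch of that predicate (pod name and kubeconfig path are placeholders):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether a pod's Ready condition is True,
// which is what the pod_ready.go waits above keep re-checking.
func isPodReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		ready, err := isPodReady(cs, "kube-system", "coredns-66bc5c9577-545dp")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}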
	
	
	==> CRI-O <==
	Oct 25 10:21:52 embed-certs-683681 crio[780]: time="2025-10-25T10:21:52.937502646Z" level=info msg="Starting container: a45124551eef22752205d099cbf92a9985bc61cd6c57b6aebbcbb8299c4b9a67" id=a2f911a5-4190-434b-b654-162922ef697a name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:21:52 embed-certs-683681 crio[780]: time="2025-10-25T10:21:52.940093807Z" level=info msg="Started container" PID=1847 containerID=a45124551eef22752205d099cbf92a9985bc61cd6c57b6aebbcbb8299c4b9a67 description=kube-system/coredns-66bc5c9577-545dp/coredns id=a2f911a5-4190-434b-b654-162922ef697a name=/runtime.v1.RuntimeService/StartContainer sandboxID=961fcad07e1be3f3254eb1b983c054a2accd50e25c7a3e2af390e314a5a781ef
	Oct 25 10:21:56 embed-certs-683681 crio[780]: time="2025-10-25T10:21:56.164139676Z" level=info msg="Running pod sandbox: default/busybox/POD" id=86014998-ef8b-4140-b2a7-3925a3a2859d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:21:56 embed-certs-683681 crio[780]: time="2025-10-25T10:21:56.164245813Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:56 embed-certs-683681 crio[780]: time="2025-10-25T10:21:56.170093162Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f5b116965ca3e6c5e066274d8f1817cf928fac77e8dd8f80d1fe7d34a8169786 UID:4edb7a57-15b4-4297-899b-96dd0dc4a482 NetNS:/var/run/netns/707ec472-555d-46b8-91df-8f4069e04ff5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000e0e210}] Aliases:map[]}"
	Oct 25 10:21:56 embed-certs-683681 crio[780]: time="2025-10-25T10:21:56.170127996Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 25 10:21:56 embed-certs-683681 crio[780]: time="2025-10-25T10:21:56.181928174Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f5b116965ca3e6c5e066274d8f1817cf928fac77e8dd8f80d1fe7d34a8169786 UID:4edb7a57-15b4-4297-899b-96dd0dc4a482 NetNS:/var/run/netns/707ec472-555d-46b8-91df-8f4069e04ff5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000e0e210}] Aliases:map[]}"
	Oct 25 10:21:56 embed-certs-683681 crio[780]: time="2025-10-25T10:21:56.182070135Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 25 10:21:56 embed-certs-683681 crio[780]: time="2025-10-25T10:21:56.18287046Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 10:21:56 embed-certs-683681 crio[780]: time="2025-10-25T10:21:56.183641679Z" level=info msg="Ran pod sandbox f5b116965ca3e6c5e066274d8f1817cf928fac77e8dd8f80d1fe7d34a8169786 with infra container: default/busybox/POD" id=86014998-ef8b-4140-b2a7-3925a3a2859d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:21:56 embed-certs-683681 crio[780]: time="2025-10-25T10:21:56.184995509Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5b300e46-3552-4703-9468-62d8bd69fe75 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:21:56 embed-certs-683681 crio[780]: time="2025-10-25T10:21:56.185146648Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=5b300e46-3552-4703-9468-62d8bd69fe75 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:21:56 embed-certs-683681 crio[780]: time="2025-10-25T10:21:56.185198804Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=5b300e46-3552-4703-9468-62d8bd69fe75 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:21:56 embed-certs-683681 crio[780]: time="2025-10-25T10:21:56.18605143Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b430a62d-44bb-47c2-87d6-7d9553872606 name=/runtime.v1.ImageService/PullImage
	Oct 25 10:21:56 embed-certs-683681 crio[780]: time="2025-10-25T10:21:56.188056031Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 25 10:21:58 embed-certs-683681 crio[780]: time="2025-10-25T10:21:58.127595624Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=b430a62d-44bb-47c2-87d6-7d9553872606 name=/runtime.v1.ImageService/PullImage
	Oct 25 10:21:58 embed-certs-683681 crio[780]: time="2025-10-25T10:21:58.128488941Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a543ff1b-ff79-4bab-85a8-a1f6222a43e1 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:21:58 embed-certs-683681 crio[780]: time="2025-10-25T10:21:58.130425615Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3bdee106-6a9e-4f7f-9b46-fb641b78d3eb name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:21:58 embed-certs-683681 crio[780]: time="2025-10-25T10:21:58.134565209Z" level=info msg="Creating container: default/busybox/busybox" id=f0f2c5ab-1f95-41c7-86c5-bab70f96b4a3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:21:58 embed-certs-683681 crio[780]: time="2025-10-25T10:21:58.134732142Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:58 embed-certs-683681 crio[780]: time="2025-10-25T10:21:58.138950973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:58 embed-certs-683681 crio[780]: time="2025-10-25T10:21:58.139460821Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:21:58 embed-certs-683681 crio[780]: time="2025-10-25T10:21:58.169732273Z" level=info msg="Created container e697946102ad5a4ac944ff2f55acec5a70f86c856923cc74dc650ba60ffaf06c: default/busybox/busybox" id=f0f2c5ab-1f95-41c7-86c5-bab70f96b4a3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:21:58 embed-certs-683681 crio[780]: time="2025-10-25T10:21:58.170640369Z" level=info msg="Starting container: e697946102ad5a4ac944ff2f55acec5a70f86c856923cc74dc650ba60ffaf06c" id=70ecb96b-1a6d-457c-b9b5-1a1efcc7239d name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:21:58 embed-certs-683681 crio[780]: time="2025-10-25T10:21:58.172932159Z" level=info msg="Started container" PID=1923 containerID=e697946102ad5a4ac944ff2f55acec5a70f86c856923cc74dc650ba60ffaf06c description=default/busybox/busybox id=70ecb96b-1a6d-457c-b9b5-1a1efcc7239d name=/runtime.v1.RuntimeService/StartContainer sandboxID=f5b116965ca3e6c5e066274d8f1817cf928fac77e8dd8f80d1fe7d34a8169786
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	e697946102ad5       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   f5b116965ca3e       busybox                                      default
	a45124551eef2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago      Running             coredns                   0                   961fcad07e1be       coredns-66bc5c9577-545dp                     kube-system
	270c7795eab97       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   4e91d9170f828       storage-provisioner                          kube-system
	f7a6452fa1591       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      24 seconds ago      Running             kube-proxy                0                   e13b22a9fffdd       kube-proxy-dbks6                             kube-system
	187d1a42ceef2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      24 seconds ago      Running             kindnet-cni               0                   c5a3c4510a839       kindnet-5zktx                                kube-system
	493b0ace3f760       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      34 seconds ago      Running             etcd                      0                   00f8c72d8ca24       etcd-embed-certs-683681                      kube-system
	a5cf89929c539       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      34 seconds ago      Running             kube-scheduler            0                   ee8d57a6b0666       kube-scheduler-embed-certs-683681            kube-system
	b872d730807e6       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      34 seconds ago      Running             kube-controller-manager   0                   8586f1af58b93       kube-controller-manager-embed-certs-683681   kube-system
	d067962793d6a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      34 seconds ago      Running             kube-apiserver            0                   486ad2a63946c       kube-apiserver-embed-certs-683681            kube-system
	
	
	==> coredns [a45124551eef22752205d099cbf92a9985bc61cd6c57b6aebbcbb8299c4b9a67] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40542 - 12144 "HINFO IN 9126712858572508532.8155006675173745341. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.136422511s
	
	
	==> describe nodes <==
	Name:               embed-certs-683681
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-683681
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=embed-certs-683681
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_21_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:21:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-683681
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:22:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:21:56 +0000   Sat, 25 Oct 2025 10:21:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:21:56 +0000   Sat, 25 Oct 2025 10:21:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:21:56 +0000   Sat, 25 Oct 2025 10:21:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:21:56 +0000   Sat, 25 Oct 2025 10:21:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-683681
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                b190e06d-a88f-488c-8710-85f0327cbd4d
	  Boot ID:                    41de8bfa-0cc1-441f-80d4-c56fb8371229
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-545dp                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-embed-certs-683681                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-5zktx                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-embed-certs-683681             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-embed-certs-683681    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-dbks6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-embed-certs-683681             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node embed-certs-683681 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node embed-certs-683681 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node embed-certs-683681 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node embed-certs-683681 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node embed-certs-683681 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node embed-certs-683681 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node embed-certs-683681 event: Registered Node embed-certs-683681 in Controller
	  Normal  NodeReady                14s                kubelet          Node embed-certs-683681 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[Oct25 10:18] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 3d 4d bf 49 5d 08 06
	[  +0.000365] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fe 72 b8 ab d2 81 08 06
	[ +29.291338] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 96 23 11 37 e3 00 08 06
	[  +0.000335] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[ +21.527050] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 89 98 95 1f c3 08 06
	[  +0.000689] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[Oct25 10:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[  +9.472150] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
	[  +6.585715] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ce 90 e9 36 a0 95 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[ +15.111475] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 5e 04 d2 54 0d 08 06
	[  +0.000467] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
	
	
	==> etcd [493b0ace3f760a7487c8d75cdd78e6c6de0154391045fe6a8b585ba2a75b4caa] <==
	{"level":"warn","ts":"2025-10-25T10:21:32.587556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:21:32.599778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:21:32.608104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:21:32.617603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:21:32.624784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:21:32.634497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:21:32.642100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:21:32.649550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:21:32.657095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:21:32.664024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:21:32.672093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:21:32.679220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:21:32.687871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:21:32.696951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:21:32.705778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:21:32.713857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:21:32.721810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:21:32.729789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:21:32.737702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:21:32.745070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:21:32.752438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:21:32.768789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:21:32.776964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:21:32.783905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:21:32.841353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39466","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:22:06 up  2:04,  0 user,  load average: 4.96, 5.06, 5.93
	Linux embed-certs-683681 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [187d1a42ceef256fb2cdb504927f951b4783999a9a39339b459a7ef3e21bb7ea] <==
	I1025 10:21:41.792264       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:21:41.792910       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1025 10:21:41.793049       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:21:41.793067       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:21:41.793088       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:21:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:21:42.062787       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:21:42.062818       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:21:42.062892       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:21:42.063081       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 10:21:42.363374       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:21:42.363424       1 metrics.go:72] Registering metrics
	I1025 10:21:42.363544       1 controller.go:711] "Syncing nftables rules"
	I1025 10:21:52.063150       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 10:21:52.063240       1 main.go:301] handling current node
	I1025 10:22:02.064408       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 10:22:02.064441       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d067962793d6a1c12e547899d3c26044d5e0072bee2d8616e5d939ab62aa13bb] <==
	I1025 10:21:33.403917       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:21:33.403933       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:21:33.405099       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:21:33.412744       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 10:21:33.421482       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:21:33.429777       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:21:33.446588       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:21:34.307706       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 10:21:34.312299       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 10:21:34.312335       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:21:34.924596       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:21:34.972815       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:21:35.115688       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 10:21:35.123358       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1025 10:21:35.124662       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:21:35.130569       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:21:35.349813       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:21:36.067258       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:21:36.078719       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 10:21:36.086804       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 10:21:40.702588       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:21:41.054537       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:21:41.058679       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:21:41.104120       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1025 10:22:04.964501       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:36950: use of closed network connection
	
	
	==> kube-controller-manager [b872d730807e62ab39eb3c66cf5e9a3ce3f393a412675036e5c00a42da8abeac] <==
	I1025 10:21:40.348443       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 10:21:40.349380       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 10:21:40.349405       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:21:40.349386       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 10:21:40.349404       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 10:21:40.349459       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 10:21:40.349473       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 10:21:40.349456       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 10:21:40.349523       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 10:21:40.349721       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 10:21:40.350073       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 10:21:40.350294       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 10:21:40.351724       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 10:21:40.352903       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 10:21:40.352919       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 10:21:40.352978       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 10:21:40.353021       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 10:21:40.353028       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 10:21:40.353032       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 10:21:40.354100       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:21:40.359340       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 10:21:40.359890       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-683681" podCIDRs=["10.244.0.0/24"]
	I1025 10:21:40.360797       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 10:21:40.371394       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:21:55.271602       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [f7a6452fa1591ca32e526b6bfb7c55e711c8ca0ea62965c07f8b9d9e9af8fb9a] <==
	I1025 10:21:41.547707       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:21:41.625743       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:21:41.726448       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:21:41.726492       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1025 10:21:41.726628       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:21:41.752479       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:21:41.752549       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:21:41.760744       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:21:41.761131       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:21:41.761168       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:21:41.762939       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:21:41.763475       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:21:41.763066       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:21:41.763570       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:21:41.763170       1 config.go:200] "Starting service config controller"
	I1025 10:21:41.763582       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:21:41.763441       1 config.go:309] "Starting node config controller"
	I1025 10:21:41.763608       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:21:41.763619       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:21:41.863602       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:21:41.864058       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:21:41.864092       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a5cf89929c539df838eb509d887ee54bb8a198c1c2f4b78430175edc97105186] <==
	I1025 10:21:33.846048       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:21:33.846098       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:21:33.846466       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:21:33.846518       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1025 10:21:33.848045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1025 10:21:33.849740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 10:21:33.849957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 10:21:33.850098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 10:21:33.849960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 10:21:33.849959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 10:21:33.850115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 10:21:33.850154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:21:33.850192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:21:33.850162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:21:33.850285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 10:21:33.850309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 10:21:33.850417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:21:33.850312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 10:21:33.850470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:21:33.850517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:21:33.850577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 10:21:33.850658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 10:21:33.850695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 10:21:34.707090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1025 10:21:35.247313       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:21:36 embed-certs-683681 kubelet[1316]: I1025 10:21:36.969609    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-683681" podStartSLOduration=0.96958494 podStartE2EDuration="969.58494ms" podCreationTimestamp="2025-10-25 10:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:21:36.959070211 +0000 UTC m=+1.129392402" watchObservedRunningTime="2025-10-25 10:21:36.96958494 +0000 UTC m=+1.139907134"
	Oct 25 10:21:36 embed-certs-683681 kubelet[1316]: I1025 10:21:36.981180    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-683681" podStartSLOduration=0.981154073 podStartE2EDuration="981.154073ms" podCreationTimestamp="2025-10-25 10:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:21:36.980775523 +0000 UTC m=+1.151097713" watchObservedRunningTime="2025-10-25 10:21:36.981154073 +0000 UTC m=+1.151476266"
	Oct 25 10:21:36 embed-certs-683681 kubelet[1316]: I1025 10:21:36.981424    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-683681" podStartSLOduration=0.981409191 podStartE2EDuration="981.409191ms" podCreationTimestamp="2025-10-25 10:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:21:36.969855307 +0000 UTC m=+1.140177498" watchObservedRunningTime="2025-10-25 10:21:36.981409191 +0000 UTC m=+1.151731382"
	Oct 25 10:21:37 embed-certs-683681 kubelet[1316]: I1025 10:21:37.005208    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-683681" podStartSLOduration=1.005189248 podStartE2EDuration="1.005189248s" podCreationTimestamp="2025-10-25 10:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:21:36.993389246 +0000 UTC m=+1.163711437" watchObservedRunningTime="2025-10-25 10:21:37.005189248 +0000 UTC m=+1.175511437"
	Oct 25 10:21:40 embed-certs-683681 kubelet[1316]: I1025 10:21:40.452823    1316 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 25 10:21:40 embed-certs-683681 kubelet[1316]: I1025 10:21:40.453682    1316 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 25 10:21:41 embed-certs-683681 kubelet[1316]: I1025 10:21:41.244519    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3398616a-6eb4-432e-bb84-ae1f166c7e71-cni-cfg\") pod \"kindnet-5zktx\" (UID: \"3398616a-6eb4-432e-bb84-ae1f166c7e71\") " pod="kube-system/kindnet-5zktx"
	Oct 25 10:21:41 embed-certs-683681 kubelet[1316]: I1025 10:21:41.244599    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/551b9ca3-e53d-4be0-bcb5-b96d76be6c14-lib-modules\") pod \"kube-proxy-dbks6\" (UID: \"551b9ca3-e53d-4be0-bcb5-b96d76be6c14\") " pod="kube-system/kube-proxy-dbks6"
	Oct 25 10:21:41 embed-certs-683681 kubelet[1316]: I1025 10:21:41.244677    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3398616a-6eb4-432e-bb84-ae1f166c7e71-xtables-lock\") pod \"kindnet-5zktx\" (UID: \"3398616a-6eb4-432e-bb84-ae1f166c7e71\") " pod="kube-system/kindnet-5zktx"
	Oct 25 10:21:41 embed-certs-683681 kubelet[1316]: I1025 10:21:41.244705    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6jm8\" (UniqueName: \"kubernetes.io/projected/551b9ca3-e53d-4be0-bcb5-b96d76be6c14-kube-api-access-b6jm8\") pod \"kube-proxy-dbks6\" (UID: \"551b9ca3-e53d-4be0-bcb5-b96d76be6c14\") " pod="kube-system/kube-proxy-dbks6"
	Oct 25 10:21:41 embed-certs-683681 kubelet[1316]: I1025 10:21:41.244735    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzc5b\" (UniqueName: \"kubernetes.io/projected/3398616a-6eb4-432e-bb84-ae1f166c7e71-kube-api-access-kzc5b\") pod \"kindnet-5zktx\" (UID: \"3398616a-6eb4-432e-bb84-ae1f166c7e71\") " pod="kube-system/kindnet-5zktx"
	Oct 25 10:21:41 embed-certs-683681 kubelet[1316]: I1025 10:21:41.244761    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/551b9ca3-e53d-4be0-bcb5-b96d76be6c14-kube-proxy\") pod \"kube-proxy-dbks6\" (UID: \"551b9ca3-e53d-4be0-bcb5-b96d76be6c14\") " pod="kube-system/kube-proxy-dbks6"
	Oct 25 10:21:41 embed-certs-683681 kubelet[1316]: I1025 10:21:41.244812    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3398616a-6eb4-432e-bb84-ae1f166c7e71-lib-modules\") pod \"kindnet-5zktx\" (UID: \"3398616a-6eb4-432e-bb84-ae1f166c7e71\") " pod="kube-system/kindnet-5zktx"
	Oct 25 10:21:41 embed-certs-683681 kubelet[1316]: I1025 10:21:41.244835    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/551b9ca3-e53d-4be0-bcb5-b96d76be6c14-xtables-lock\") pod \"kube-proxy-dbks6\" (UID: \"551b9ca3-e53d-4be0-bcb5-b96d76be6c14\") " pod="kube-system/kube-proxy-dbks6"
	Oct 25 10:21:41 embed-certs-683681 kubelet[1316]: I1025 10:21:41.980541    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dbks6" podStartSLOduration=0.980510692 podStartE2EDuration="980.510692ms" podCreationTimestamp="2025-10-25 10:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:21:41.966256513 +0000 UTC m=+6.136578705" watchObservedRunningTime="2025-10-25 10:21:41.980510692 +0000 UTC m=+6.150832886"
	Oct 25 10:21:43 embed-certs-683681 kubelet[1316]: I1025 10:21:43.218140    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-5zktx" podStartSLOduration=2.218107286 podStartE2EDuration="2.218107286s" podCreationTimestamp="2025-10-25 10:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:21:41.980715816 +0000 UTC m=+6.151037991" watchObservedRunningTime="2025-10-25 10:21:43.218107286 +0000 UTC m=+7.388429474"
	Oct 25 10:21:52 embed-certs-683681 kubelet[1316]: I1025 10:21:52.536755    1316 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 25 10:21:52 embed-certs-683681 kubelet[1316]: I1025 10:21:52.613418    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2709fe3-a1d1-4394-8cf7-3776dc8fd318-config-volume\") pod \"coredns-66bc5c9577-545dp\" (UID: \"a2709fe3-a1d1-4394-8cf7-3776dc8fd318\") " pod="kube-system/coredns-66bc5c9577-545dp"
	Oct 25 10:21:52 embed-certs-683681 kubelet[1316]: I1025 10:21:52.613482    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/42d81686-dd78-4ed1-9ead-cbcdca1d14ce-tmp\") pod \"storage-provisioner\" (UID: \"42d81686-dd78-4ed1-9ead-cbcdca1d14ce\") " pod="kube-system/storage-provisioner"
	Oct 25 10:21:52 embed-certs-683681 kubelet[1316]: I1025 10:21:52.613504    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqrw4\" (UniqueName: \"kubernetes.io/projected/42d81686-dd78-4ed1-9ead-cbcdca1d14ce-kube-api-access-cqrw4\") pod \"storage-provisioner\" (UID: \"42d81686-dd78-4ed1-9ead-cbcdca1d14ce\") " pod="kube-system/storage-provisioner"
	Oct 25 10:21:52 embed-certs-683681 kubelet[1316]: I1025 10:21:52.613532    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bkdd\" (UniqueName: \"kubernetes.io/projected/a2709fe3-a1d1-4394-8cf7-3776dc8fd318-kube-api-access-2bkdd\") pod \"coredns-66bc5c9577-545dp\" (UID: \"a2709fe3-a1d1-4394-8cf7-3776dc8fd318\") " pod="kube-system/coredns-66bc5c9577-545dp"
	Oct 25 10:21:53 embed-certs-683681 kubelet[1316]: I1025 10:21:53.012997    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-545dp" podStartSLOduration=12.01296999 podStartE2EDuration="12.01296999s" podCreationTimestamp="2025-10-25 10:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:21:53.000598905 +0000 UTC m=+17.170921128" watchObservedRunningTime="2025-10-25 10:21:53.01296999 +0000 UTC m=+17.183292251"
	Oct 25 10:21:53 embed-certs-683681 kubelet[1316]: I1025 10:21:53.013221    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.013207115 podStartE2EDuration="11.013207115s" podCreationTimestamp="2025-10-25 10:21:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:21:53.01318345 +0000 UTC m=+17.183505623" watchObservedRunningTime="2025-10-25 10:21:53.013207115 +0000 UTC m=+17.183529307"
	Oct 25 10:21:55 embed-certs-683681 kubelet[1316]: I1025 10:21:55.932410    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nl2x\" (UniqueName: \"kubernetes.io/projected/4edb7a57-15b4-4297-899b-96dd0dc4a482-kube-api-access-7nl2x\") pod \"busybox\" (UID: \"4edb7a57-15b4-4297-899b-96dd0dc4a482\") " pod="default/busybox"
	Oct 25 10:21:59 embed-certs-683681 kubelet[1316]: I1025 10:21:59.023544    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.079331192 podStartE2EDuration="4.023520138s" podCreationTimestamp="2025-10-25 10:21:55 +0000 UTC" firstStartedPulling="2025-10-25 10:21:56.18555528 +0000 UTC m=+20.355877453" lastFinishedPulling="2025-10-25 10:21:58.12974421 +0000 UTC m=+22.300066399" observedRunningTime="2025-10-25 10:21:59.023164604 +0000 UTC m=+23.193486794" watchObservedRunningTime="2025-10-25 10:21:59.023520138 +0000 UTC m=+23.193842329"
	
	
	==> storage-provisioner [270c7795eab979d8879bf4add6cd1959ed172a9602206454cb7a8e71ad1ed8eb] <==
	I1025 10:21:52.941033       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:21:52.951796       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:21:52.951866       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 10:21:52.954337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:52.960799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:21:52.960987       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:21:52.961212       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-683681_cf7ac3fa-2620-4f4d-b1fb-59100f72949c!
	I1025 10:21:52.961517       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"37804b28-e18f-4166-93e2-5ef50997fe60", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-683681_cf7ac3fa-2620-4f4d-b1fb-59100f72949c became leader
	W1025 10:21:52.963599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:52.969378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:21:53.061727       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-683681_cf7ac3fa-2620-4f4d-b1fb-59100f72949c!
	W1025 10:21:54.973212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:54.978110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:56.981675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:56.986289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:58.990187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:21:58.997261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:22:01.001182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:22:01.006262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:22:03.010504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:22:03.015738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:22:05.019205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:22:05.024775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-683681 -n embed-certs-683681
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-683681 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.44s)
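To iterate on this failure outside CI, a minimal sketch (assuming a minikube source checkout and the prebuilt out/minikube-linux-amd64 binary referenced above; any repo-specific flags or build tags the integration suite requires are elided) is to re-run just this subtest with the standard go test runner, then repeat the post-mortem health check by hand:

	# Re-run only the failing subtest; -run matches the full subtest path.
	go test ./test/integration -v -timeout 30m -run 'TestStartStop/group/embed-certs/serial/EnableAddonWhileActive'
	# Same check the harness runs above: list any pods not in Running phase.
	kubectl --context embed-certs-683681 get po -A --field-selector=status.phase!=Running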

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (5.84s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-683681 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-683681 --alsologtostderr -v=1: exit status 80 (1.915608852s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-683681 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 10:23:23.913628  653652 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:23:23.913912  653652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:23:23.913928  653652 out.go:374] Setting ErrFile to fd 2...
	I1025 10:23:23.913933  653652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:23:23.914168  653652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 10:23:23.914417  653652 out.go:368] Setting JSON to false
	I1025 10:23:23.914461  653652 mustload.go:65] Loading cluster: embed-certs-683681
	I1025 10:23:23.914868  653652 config.go:182] Loaded profile config "embed-certs-683681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:23:23.915263  653652 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:23:23.936236  653652 host.go:66] Checking if "embed-certs-683681" exists ...
	I1025 10:23:23.936637  653652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:23:23.999175  653652 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-25 10:23:23.988719212 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:23:23.999844  653652 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-683681 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 10:23:24.002515  653652 out.go:179] * Pausing node embed-certs-683681 ... 
	I1025 10:23:24.004278  653652 host.go:66] Checking if "embed-certs-683681" exists ...
	I1025 10:23:24.004639  653652 ssh_runner.go:195] Run: systemctl --version
	I1025 10:23:24.004698  653652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:23:24.024763  653652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:23:24.126876  653652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:23:24.140729  653652 pause.go:52] kubelet running: true
	I1025 10:23:24.140798  653652 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:23:24.303906  653652 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:23:24.304044  653652 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:23:24.378270  653652 cri.go:89] found id: "8cadb44f0328e5bdc8a75a15aa015760ee35f78df670f55e688fcc7b1659aeef"
	I1025 10:23:24.378302  653652 cri.go:89] found id: "9508b4a27687ec159979bd17e4bb05c52528b9b9205a5dd4c224cf45bbbdf857"
	I1025 10:23:24.378306  653652 cri.go:89] found id: "5a596c77f5df556e869709b8cf5dcb9c78dc06441ded8c2f7831e35736644375"
	I1025 10:23:24.378310  653652 cri.go:89] found id: "f7c43259b62da489acda62b9d2e1e2867140658c7c81ddd6b20c46ec720bb6b6"
	I1025 10:23:24.378312  653652 cri.go:89] found id: "0e43c9fb1569ef6e07a5677d2a15b6334bc4fe7db76411edffd13663fe4716c1"
	I1025 10:23:24.378333  653652 cri.go:89] found id: "e23b1b78e5c41f9e1aede2d3b6ae6248ab011db8c6c4eb8d454bf9fb3d83c20d"
	I1025 10:23:24.378339  653652 cri.go:89] found id: "dc575cdd84b4a101c9861bb4bbb3fd1c6b9365f0ddd8cf06b22b3b39ff95c2c6"
	I1025 10:23:24.378344  653652 cri.go:89] found id: "34d10690becbf8807247e176ac1d8a485247e95e7e43b59248e6b35de5993f58"
	I1025 10:23:24.378349  653652 cri.go:89] found id: "a672b9f6352dbc575a968854b42894ae89478ba62caf0dddb38381973fba07e4"
	I1025 10:23:24.378372  653652 cri.go:89] found id: "488d8e2589cf8b78062821187d7cc8a70dc6b22b21a81dd249dbd3f24f1fdf7f"
	I1025 10:23:24.378381  653652 cri.go:89] found id: "ae91c0eace8a71b1845d97507f08b3cce89463dc558fab2ed073d1b251d048a2"
	I1025 10:23:24.378384  653652 cri.go:89] found id: ""
	I1025 10:23:24.378435  653652 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:23:24.392268  653652 retry.go:31] will retry after 319.265882ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:23:24Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:23:24.711712  653652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:23:24.726257  653652 pause.go:52] kubelet running: false
	I1025 10:23:24.726339  653652 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:23:24.864026  653652 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:23:24.864110  653652 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:23:24.938181  653652 cri.go:89] found id: "8cadb44f0328e5bdc8a75a15aa015760ee35f78df670f55e688fcc7b1659aeef"
	I1025 10:23:24.938204  653652 cri.go:89] found id: "9508b4a27687ec159979bd17e4bb05c52528b9b9205a5dd4c224cf45bbbdf857"
	I1025 10:23:24.938209  653652 cri.go:89] found id: "5a596c77f5df556e869709b8cf5dcb9c78dc06441ded8c2f7831e35736644375"
	I1025 10:23:24.938214  653652 cri.go:89] found id: "f7c43259b62da489acda62b9d2e1e2867140658c7c81ddd6b20c46ec720bb6b6"
	I1025 10:23:24.938218  653652 cri.go:89] found id: "0e43c9fb1569ef6e07a5677d2a15b6334bc4fe7db76411edffd13663fe4716c1"
	I1025 10:23:24.938222  653652 cri.go:89] found id: "e23b1b78e5c41f9e1aede2d3b6ae6248ab011db8c6c4eb8d454bf9fb3d83c20d"
	I1025 10:23:24.938227  653652 cri.go:89] found id: "dc575cdd84b4a101c9861bb4bbb3fd1c6b9365f0ddd8cf06b22b3b39ff95c2c6"
	I1025 10:23:24.938230  653652 cri.go:89] found id: "34d10690becbf8807247e176ac1d8a485247e95e7e43b59248e6b35de5993f58"
	I1025 10:23:24.938234  653652 cri.go:89] found id: "a672b9f6352dbc575a968854b42894ae89478ba62caf0dddb38381973fba07e4"
	I1025 10:23:24.938241  653652 cri.go:89] found id: "488d8e2589cf8b78062821187d7cc8a70dc6b22b21a81dd249dbd3f24f1fdf7f"
	I1025 10:23:24.938245  653652 cri.go:89] found id: "ae91c0eace8a71b1845d97507f08b3cce89463dc558fab2ed073d1b251d048a2"
	I1025 10:23:24.938249  653652 cri.go:89] found id: ""
	I1025 10:23:24.938298  653652 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:23:24.951942  653652 retry.go:31] will retry after 556.595991ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:23:24Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:23:25.508753  653652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:23:25.524025  653652 pause.go:52] kubelet running: false
	I1025 10:23:25.524084  653652 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:23:25.668238  653652 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:23:25.668336  653652 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:23:25.739868  653652 cri.go:89] found id: "8cadb44f0328e5bdc8a75a15aa015760ee35f78df670f55e688fcc7b1659aeef"
	I1025 10:23:25.739897  653652 cri.go:89] found id: "9508b4a27687ec159979bd17e4bb05c52528b9b9205a5dd4c224cf45bbbdf857"
	I1025 10:23:25.739902  653652 cri.go:89] found id: "5a596c77f5df556e869709b8cf5dcb9c78dc06441ded8c2f7831e35736644375"
	I1025 10:23:25.739906  653652 cri.go:89] found id: "f7c43259b62da489acda62b9d2e1e2867140658c7c81ddd6b20c46ec720bb6b6"
	I1025 10:23:25.739909  653652 cri.go:89] found id: "0e43c9fb1569ef6e07a5677d2a15b6334bc4fe7db76411edffd13663fe4716c1"
	I1025 10:23:25.739913  653652 cri.go:89] found id: "e23b1b78e5c41f9e1aede2d3b6ae6248ab011db8c6c4eb8d454bf9fb3d83c20d"
	I1025 10:23:25.739915  653652 cri.go:89] found id: "dc575cdd84b4a101c9861bb4bbb3fd1c6b9365f0ddd8cf06b22b3b39ff95c2c6"
	I1025 10:23:25.739917  653652 cri.go:89] found id: "34d10690becbf8807247e176ac1d8a485247e95e7e43b59248e6b35de5993f58"
	I1025 10:23:25.739920  653652 cri.go:89] found id: "a672b9f6352dbc575a968854b42894ae89478ba62caf0dddb38381973fba07e4"
	I1025 10:23:25.739931  653652 cri.go:89] found id: "488d8e2589cf8b78062821187d7cc8a70dc6b22b21a81dd249dbd3f24f1fdf7f"
	I1025 10:23:25.739934  653652 cri.go:89] found id: "ae91c0eace8a71b1845d97507f08b3cce89463dc558fab2ed073d1b251d048a2"
	I1025 10:23:25.739936  653652 cri.go:89] found id: ""
	I1025 10:23:25.739977  653652 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:23:25.755547  653652 out.go:203] 
	W1025 10:23:25.757045  653652 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:23:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 10:23:25.757066  653652 out.go:285] * 
	W1025 10:23:25.761437  653652 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 10:23:25.762866  653652 out.go:203] 

                                                
                                                
** /stderr **
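Note on the stderr above: the %!s(bool=false) / %!s(int=0) tokens and the trailing "(MISSING)" in the flag dump are Go fmt artifacts rather than corruption. minikube prints its flag map through a %s-style verb, so non-string values render as %!s(type=value), and fmt emits a (MISSING) marker when a verb has no matching operand.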
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-683681 --alsologtostderr -v=1 failed: exit status 80
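The exit status 80 traces to the repeated `sudo runc list -f json` failures in the stderr above: pause needs to enumerate running containers via runc, but /run/runc does not exist on the node, so every retry fails even though crictl keeps finding the same eleven container IDs. A minimal diagnostic sketch follows; the container name comes from this report, while the /etc/crio path and runtime_root hint are assumptions to verify on the node, not values taken from the log.

	# Minimal diagnostic sketch, assuming shell access to the kic node container.
	docker exec embed-certs-683681 bash -c '
	  # runc keeps per-container state under a root directory (default /run/runc for root);
	  # the pause failure means nothing on this node has created that directory.
	  ls -d /run/runc 2>/dev/null || echo "/run/runc is absent"
	  # CRI-O can point the runtime at a different state root via runtime_root in its
	  # config; the /etc/crio path here is an assumption to check on the node:
	  grep -rn "runtime_root" /etc/crio/ 2>/dev/null
	  # The CRI itself still lists the containers, matching the "found id:" lines above:
	  sudo crictl ps --quiet | head
	'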
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-683681
helpers_test.go:243: (dbg) docker inspect embed-certs-683681:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "664aed4a01f92c3871e075fd05ccbeee5ecb48fcd448f58432e85e0dc505d878",
	        "Created": "2025-10-25T10:21:16.235046016Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 651136,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:22:26.068501528Z",
	            "FinishedAt": "2025-10-25T10:22:25.128540678Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/664aed4a01f92c3871e075fd05ccbeee5ecb48fcd448f58432e85e0dc505d878/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/664aed4a01f92c3871e075fd05ccbeee5ecb48fcd448f58432e85e0dc505d878/hostname",
	        "HostsPath": "/var/lib/docker/containers/664aed4a01f92c3871e075fd05ccbeee5ecb48fcd448f58432e85e0dc505d878/hosts",
	        "LogPath": "/var/lib/docker/containers/664aed4a01f92c3871e075fd05ccbeee5ecb48fcd448f58432e85e0dc505d878/664aed4a01f92c3871e075fd05ccbeee5ecb48fcd448f58432e85e0dc505d878-json.log",
	        "Name": "/embed-certs-683681",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-683681:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-683681",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "664aed4a01f92c3871e075fd05ccbeee5ecb48fcd448f58432e85e0dc505d878",
	                "LowerDir": "/var/lib/docker/overlay2/22dc02559454c5069aa97024407358906ca2c7013bf26825d319003749eb66b4-init/diff:/var/lib/docker/overlay2/9d1960ec6ab151df7efbd4d43f9ccd1aaf4e0bd9e7db4285118644a6d5a54279/diff",
	                "MergedDir": "/var/lib/docker/overlay2/22dc02559454c5069aa97024407358906ca2c7013bf26825d319003749eb66b4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/22dc02559454c5069aa97024407358906ca2c7013bf26825d319003749eb66b4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/22dc02559454c5069aa97024407358906ca2c7013bf26825d319003749eb66b4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-683681",
	                "Source": "/var/lib/docker/volumes/embed-certs-683681/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-683681",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-683681",
	                "name.minikube.sigs.k8s.io": "embed-certs-683681",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "afd5f9ce71f28c93399bb97ba8437374eb3f6b307416eb354a32ca1583210d02",
	            "SandboxKey": "/var/run/docker/netns/afd5f9ce71f2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-683681": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:7f:4b:23:62:45",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "afda803609319b40fede74121fd584f53a0a22be2a797d9c1be1e1370a5a8dff",
	                    "EndpointID": "df4d9db28a156fa3904fa5a42edd76ec577c26329a871f9272cdfbeef93a64ed",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-683681",
	                        "664aed4a01f9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
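For reference, the 22/tcp entry above (HostIp 127.0.0.1, HostPort 33133) is the mapping the pause log resolved with its Go template; the same one-liner, quoted from the log, recovers the node's SSH port while the container is running:

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  embed-certs-683681
	# prints 33133 for this run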
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-683681 -n embed-certs-683681
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-683681 -n embed-certs-683681: exit status 2 (357.427083ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
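The exit status 2 here is expected noise from the failed pause: the host container itself still reports Running, but status exits nonzero because not every component is healthy (the pause attempts had already run `systemctl disable --now kubelet`, as the log above shows), which is why the harness annotates it with "(may be ok)".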
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-683681 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-683681 logs -n 25: (1.160749662s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p newest-cni-667966 --alsologtostderr -v=1                                                                                                                              │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-767846 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ start   │ -p default-k8s-diff-port-767846 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p newest-cni-667966                                                                                                                                                     │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p newest-cni-667966                                                                                                                                                     │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p disable-driver-mounts-805899                                                                                                                                          │ disable-driver-mounts-805899 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ start   │ -p embed-certs-683681 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-683681           │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ image   │ old-k8s-version-714798 image list --format=json                                                                                                                          │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ pause   │ -p old-k8s-version-714798 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	│ delete  │ -p old-k8s-version-714798                                                                                                                                                │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p old-k8s-version-714798                                                                                                                                                │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ image   │ no-preload-899665 image list --format=json                                                                                                                               │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ pause   │ -p no-preload-899665 --alsologtostderr -v=1                                                                                                                              │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	│ delete  │ -p no-preload-899665                                                                                                                                                     │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:22 UTC │
	│ image   │ default-k8s-diff-port-767846 image list --format=json                                                                                                                    │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │ 25 Oct 25 10:22 UTC │
	│ pause   │ -p default-k8s-diff-port-767846 --alsologtostderr -v=1                                                                                                                   │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │                     │
	│ delete  │ -p no-preload-899665                                                                                                                                                     │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │ 25 Oct 25 10:22 UTC │
	│ addons  │ enable metrics-server -p embed-certs-683681 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-683681           │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-767846                                                                                                                                          │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │ 25 Oct 25 10:22 UTC │
	│ stop    │ -p embed-certs-683681 --alsologtostderr -v=3                                                                                                                             │ embed-certs-683681           │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │ 25 Oct 25 10:22 UTC │
	│ delete  │ -p default-k8s-diff-port-767846                                                                                                                                          │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │ 25 Oct 25 10:22 UTC │
	│ addons  │ enable dashboard -p embed-certs-683681 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-683681           │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │ 25 Oct 25 10:22 UTC │
	│ start   │ -p embed-certs-683681 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-683681           │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │ 25 Oct 25 10:23 UTC │
	│ image   │ embed-certs-683681 image list --format=json                                                                                                                              │ embed-certs-683681           │ jenkins │ v1.37.0 │ 25 Oct 25 10:23 UTC │ 25 Oct 25 10:23 UTC │
	│ pause   │ -p embed-certs-683681 --alsologtostderr -v=1                                                                                                                             │ embed-certs-683681           │ jenkins │ v1.37.0 │ 25 Oct 25 10:23 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:22:25
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:22:25.811526  650937 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:22:25.811824  650937 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:22:25.811836  650937 out.go:374] Setting ErrFile to fd 2...
	I1025 10:22:25.811841  650937 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:22:25.812034  650937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 10:22:25.812528  650937 out.go:368] Setting JSON to false
	I1025 10:22:25.813671  650937 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7495,"bootTime":1761380251,"procs":309,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 10:22:25.813796  650937 start.go:141] virtualization: kvm guest
	I1025 10:22:25.816027  650937 out.go:179] * [embed-certs-683681] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 10:22:25.817628  650937 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:22:25.817669  650937 notify.go:220] Checking for updates...
	I1025 10:22:25.820589  650937 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:22:25.821848  650937 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:22:25.823064  650937 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	I1025 10:22:25.824573  650937 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 10:22:25.825915  650937 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:22:25.828050  650937 config.go:182] Loaded profile config "embed-certs-683681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:22:25.828919  650937 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:22:25.855578  650937 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 10:22:25.855692  650937 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:22:25.918056  650937 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-25 10:22:25.906868562 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:22:25.918205  650937 docker.go:318] overlay module found
	I1025 10:22:25.920390  650937 out.go:179] * Using the docker driver based on existing profile
	I1025 10:22:25.921798  650937 start.go:305] selected driver: docker
	I1025 10:22:25.921824  650937 start.go:925] validating driver "docker" against &{Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:22:25.921957  650937 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:22:25.922800  650937 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:22:25.989584  650937 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-25 10:22:25.978026276 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:22:25.989904  650937 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:22:25.989940  650937 cni.go:84] Creating CNI manager for ""
	I1025 10:22:25.989975  650937 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:22:25.990022  650937 start.go:349] cluster config:
	{Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:22:25.992139  650937 out.go:179] * Starting "embed-certs-683681" primary control-plane node in "embed-certs-683681" cluster
	I1025 10:22:25.993435  650937 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:22:25.994691  650937 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:22:25.995868  650937 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:22:25.995916  650937 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:22:25.995925  650937 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 10:22:25.995965  650937 cache.go:58] Caching tarball of preloaded images
	I1025 10:22:25.996079  650937 preload.go:233] Found /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 10:22:25.996092  650937 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:22:25.996218  650937 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/config.json ...
	I1025 10:22:26.018405  650937 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:22:26.018433  650937 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:22:26.018459  650937 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:22:26.018491  650937 start.go:360] acquireMachinesLock for embed-certs-683681: {Name:mkb49d854e007783568583b216321c2ada753d14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:22:26.018597  650937 start.go:364] duration metric: took 58.454µs to acquireMachinesLock for "embed-certs-683681"
	I1025 10:22:26.018625  650937 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:22:26.018637  650937 fix.go:54] fixHost starting: 
	I1025 10:22:26.018950  650937 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:22:26.037651  650937 fix.go:112] recreateIfNeeded on embed-certs-683681: state=Stopped err=<nil>
	W1025 10:22:26.037685  650937 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:22:26.039795  650937 out.go:252] * Restarting existing docker container for "embed-certs-683681" ...
	I1025 10:22:26.039883  650937 cli_runner.go:164] Run: docker start embed-certs-683681
	I1025 10:22:26.298888  650937 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:22:26.322052  650937 kic.go:430] container "embed-certs-683681" state is running.
	I1025 10:22:26.322558  650937 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-683681
	I1025 10:22:26.342786  650937 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/config.json ...
	I1025 10:22:26.343059  650937 machine.go:93] provisionDockerMachine start ...
	I1025 10:22:26.343126  650937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:22:26.363148  650937 main.go:141] libmachine: Using SSH client type: native
	I1025 10:22:26.363460  650937 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1025 10:22:26.363477  650937 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:22:26.364238  650937 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42172->127.0.0.1:33133: read: connection reset by peer
	I1025 10:22:29.510043  650937 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-683681
	
	I1025 10:22:29.510096  650937 ubuntu.go:182] provisioning hostname "embed-certs-683681"
	I1025 10:22:29.510185  650937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:22:29.529913  650937 main.go:141] libmachine: Using SSH client type: native
	I1025 10:22:29.530146  650937 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1025 10:22:29.530159  650937 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-683681 && echo "embed-certs-683681" | sudo tee /etc/hostname
	I1025 10:22:29.686569  650937 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-683681
	
	I1025 10:22:29.686645  650937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:22:29.706958  650937 main.go:141] libmachine: Using SSH client type: native
	I1025 10:22:29.707260  650937 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1025 10:22:29.707306  650937 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-683681' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-683681/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-683681' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:22:29.851908  650937 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:22:29.851946  650937 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-321838/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-321838/.minikube}
	I1025 10:22:29.851970  650937 ubuntu.go:190] setting up certificates
	I1025 10:22:29.851989  650937 provision.go:84] configureAuth start
	I1025 10:22:29.852043  650937 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-683681
	I1025 10:22:29.871251  650937 provision.go:143] copyHostCerts
	I1025 10:22:29.871352  650937 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem, removing ...
	I1025 10:22:29.871379  650937 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem
	I1025 10:22:29.871471  650937 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem (1078 bytes)
	I1025 10:22:29.871635  650937 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem, removing ...
	I1025 10:22:29.871678  650937 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem
	I1025 10:22:29.871729  650937 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem (1123 bytes)
	I1025 10:22:29.871814  650937 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem, removing ...
	I1025 10:22:29.871822  650937 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem
	I1025 10:22:29.871863  650937 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem (1679 bytes)
	I1025 10:22:29.872085  650937 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem org=jenkins.embed-certs-683681 san=[127.0.0.1 192.168.94.2 embed-certs-683681 localhost minikube]
	I1025 10:22:29.984343  650937 provision.go:177] copyRemoteCerts
	I1025 10:22:29.984415  650937 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:22:29.984456  650937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:22:30.003605  650937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:22:30.106691  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:22:30.125676  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1025 10:22:30.145055  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:22:30.164000  650937 provision.go:87] duration metric: took 311.99694ms to configureAuth
	I1025 10:22:30.164030  650937 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:22:30.164234  650937 config.go:182] Loaded profile config "embed-certs-683681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:22:30.164356  650937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:22:30.183441  650937 main.go:141] libmachine: Using SSH client type: native
	I1025 10:22:30.183697  650937 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1025 10:22:30.183724  650937 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:22:30.491506  650937 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:22:30.491534  650937 machine.go:96] duration metric: took 4.148458506s to provisionDockerMachine
	I1025 10:22:30.491550  650937 start.go:293] postStartSetup for "embed-certs-683681" (driver="docker")
	I1025 10:22:30.491566  650937 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:22:30.491634  650937 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:22:30.491687  650937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:22:30.511988  650937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:22:30.616719  650937 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:22:30.620710  650937 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:22:30.620740  650937 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:22:30.620754  650937 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/addons for local assets ...
	I1025 10:22:30.620807  650937 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/files for local assets ...
	I1025 10:22:30.620876  650937 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem -> 3254552.pem in /etc/ssl/certs
	I1025 10:22:30.620973  650937 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:22:30.629162  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:22:30.648583  650937 start.go:296] duration metric: took 157.013923ms for postStartSetup
	I1025 10:22:30.648667  650937 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:22:30.648705  650937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:22:30.667816  650937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:22:30.768186  650937 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
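
The two df probes above each read a single column: awk's NR==2 skips the header row, $5 of `df -h` is the Use% column, and $4 of `df -BG` is the available space in whole gigabytes. For illustration:

	df -h /var | awk 'NR==2{print $5}'   # prints e.g. "12%"
	df -BG /var | awk 'NR==2{print $4}'  # prints e.g. "250G"
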
	I1025 10:22:30.773180  650937 fix.go:56] duration metric: took 4.754534958s for fixHost
	I1025 10:22:30.773214  650937 start.go:83] releasing machines lock for "embed-certs-683681", held for 4.754601126s
	I1025 10:22:30.773296  650937 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-683681
	I1025 10:22:30.792498  650937 ssh_runner.go:195] Run: cat /version.json
	I1025 10:22:30.792549  650937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:22:30.792594  650937 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:22:30.792699  650937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:22:30.812116  650937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:22:30.812288  650937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:22:30.965514  650937 ssh_runner.go:195] Run: systemctl --version
	I1025 10:22:30.972715  650937 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:22:31.012006  650937 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:22:31.017272  650937 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:22:31.017362  650937 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:22:31.026209  650937 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:22:31.026242  650937 start.go:495] detecting cgroup driver to use...
	I1025 10:22:31.026283  650937 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 10:22:31.026350  650937 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:22:31.042521  650937 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:22:31.056334  650937 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:22:31.056406  650937 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:22:31.073008  650937 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:22:31.087153  650937 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:22:31.175207  650937 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:22:31.256726  650937 docker.go:234] disabling docker service ...
	I1025 10:22:31.256796  650937 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:22:31.272066  650937 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:22:31.285614  650937 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:22:31.367461  650937 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:22:31.449361  650937 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:22:31.463666  650937 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:22:31.479927  650937 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:22:31.479993  650937 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:22:31.490565  650937 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 10:22:31.490649  650937 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:22:31.500815  650937 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:22:31.510530  650937 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:22:31.520022  650937 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:22:31.529061  650937 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:22:31.538958  650937 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:22:31.548107  650937 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
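
Taken together, the sed edits above set the pause image, switch the cgroup manager to systemd, pin conmon to the pod cgroup, and open unprivileged low ports. A quick way to confirm the resulting drop-in (expected values sketched as comments; the file shipped in the image may carry other keys too):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
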
	I1025 10:22:31.557729  650937 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:22:31.565991  650937 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:22:31.574556  650937 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:22:31.657549  650937 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:22:31.775056  650937 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:22:31.775132  650937 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:22:31.779628  650937 start.go:563] Will wait 60s for crictl version
	I1025 10:22:31.779691  650937 ssh_runner.go:195] Run: which crictl
	I1025 10:22:31.783608  650937 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:22:31.809684  650937 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:22:31.809759  650937 ssh_runner.go:195] Run: crio --version
	I1025 10:22:31.841199  650937 ssh_runner.go:195] Run: crio --version
	I1025 10:22:31.874396  650937 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:22:31.875887  650937 cli_runner.go:164] Run: docker network inspect embed-certs-683681 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:22:31.894932  650937 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1025 10:22:31.899692  650937 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
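
The bash one-liner above rewrites /etc/hosts in place: it filters out any stale host.minikube.internal line, appends the fresh mapping, and copies the temp file back over /etc/hosts. The net effect:

	grep 'host.minikube.internal' /etc/hosts
	# 192.168.94.1	host.minikube.internal
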
	I1025 10:22:31.911140  650937 kubeadm.go:883] updating cluster {Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:22:31.911272  650937 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:22:31.911348  650937 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:22:31.948425  650937 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:22:31.948449  650937 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:22:31.948513  650937 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:22:31.974990  650937 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:22:31.975013  650937 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:22:31.975021  650937 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1025 10:22:31.975177  650937 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-683681 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
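
The empty ExecStart= line in the kubelet drop-in above is deliberate systemd syntax: a bare ExecStart= clears the command list inherited from the base unit, so the ExecStart=... that follows replaces it rather than adding a second command. The merged unit can be inspected with:

	systemctl cat kubelet
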
	I1025 10:22:31.975265  650937 ssh_runner.go:195] Run: crio config
	I1025 10:22:32.023037  650937 cni.go:84] Creating CNI manager for ""
	I1025 10:22:32.023058  650937 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:22:32.023088  650937 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:22:32.023122  650937 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-683681 NodeName:embed-certs-683681 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:22:32.023280  650937 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-683681"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
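
A generated config like the one above can be sanity-checked before the real run with kubeadm's dry-run mode (illustrative; the .new path is where the file is copied a few lines below):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run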
	
	I1025 10:22:32.023373  650937 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:22:32.032302  650937 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:22:32.032384  650937 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:22:32.040941  650937 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1025 10:22:32.054665  650937 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:22:32.068612  650937 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1025 10:22:32.082508  650937 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:22:32.086585  650937 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:22:32.097751  650937 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:22:32.175518  650937 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:22:32.202070  650937 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681 for IP: 192.168.94.2
	I1025 10:22:32.202095  650937 certs.go:195] generating shared ca certs ...
	I1025 10:22:32.202122  650937 certs.go:227] acquiring lock for ca certs: {Name:mke559c04eea9c265cb2a7beab0da125bee52db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:22:32.202273  650937 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key
	I1025 10:22:32.202330  650937 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key
	I1025 10:22:32.202346  650937 certs.go:257] generating profile certs ...
	I1025 10:22:32.202433  650937 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.key
	I1025 10:22:32.202500  650937 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81
	I1025 10:22:32.202541  650937 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key
	I1025 10:22:32.202646  650937 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem (1338 bytes)
	W1025 10:22:32.202676  650937 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455_empty.pem, impossibly tiny 0 bytes
	I1025 10:22:32.202704  650937 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:22:32.202728  650937 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:22:32.202800  650937 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:22:32.202834  650937 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem (1679 bytes)
	I1025 10:22:32.202873  650937 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:22:32.203433  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:22:32.223965  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 10:22:32.244737  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:22:32.266559  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:22:32.292247  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1025 10:22:32.312464  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 10:22:32.333092  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:22:32.352618  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:22:32.372363  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /usr/share/ca-certificates/3254552.pem (1708 bytes)
	I1025 10:22:32.392680  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:22:32.413281  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem --> /usr/share/ca-certificates/325455.pem (1338 bytes)
	I1025 10:22:32.431923  650937 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:22:32.446205  650937 ssh_runner.go:195] Run: openssl version
	I1025 10:22:32.452911  650937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3254552.pem && ln -fs /usr/share/ca-certificates/3254552.pem /etc/ssl/certs/3254552.pem"
	I1025 10:22:32.462222  650937 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3254552.pem
	I1025 10:22:32.466305  650937 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:38 /usr/share/ca-certificates/3254552.pem
	I1025 10:22:32.466398  650937 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3254552.pem
	I1025 10:22:32.501395  650937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3254552.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:22:32.510850  650937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:22:32.520220  650937 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:22:32.524259  650937 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:32 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:22:32.524336  650937 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:22:32.559601  650937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:22:32.568831  650937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/325455.pem && ln -fs /usr/share/ca-certificates/325455.pem /etc/ssl/certs/325455.pem"
	I1025 10:22:32.578293  650937 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/325455.pem
	I1025 10:22:32.582771  650937 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:38 /usr/share/ca-certificates/325455.pem
	I1025 10:22:32.582837  650937 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/325455.pem
	I1025 10:22:32.618701  650937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/325455.pem /etc/ssl/certs/51391683.0"
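
The 51391683.0, 3ec20f2e.0 and b5213941.0 link names above follow OpenSSL's subject-hash convention: the TLS stack looks a CA up by the hash of its subject name, with the .0 suffix disambiguating collisions. The hash for any given certificate is reproducible with:

	openssl x509 -subject_hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
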
	I1025 10:22:32.628354  650937 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:22:32.632792  650937 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:22:32.667884  650937 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:22:32.703809  650937 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:22:32.748759  650937 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:22:32.790091  650937 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:22:32.832785  650937 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
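
Each -checkend 86400 call above asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours); the command exits non-zero if so, which is what separates "reuse the existing certs" from "regenerate". For example:

	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "valid for at least another 24h"
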
	I1025 10:22:32.886164  650937 kubeadm.go:400] StartCluster: {Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:22:32.886287  650937 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:22:32.886397  650937 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:22:32.919354  650937 cri.go:89] found id: "e23b1b78e5c41f9e1aede2d3b6ae6248ab011db8c6c4eb8d454bf9fb3d83c20d"
	I1025 10:22:32.919385  650937 cri.go:89] found id: "dc575cdd84b4a101c9861bb4bbb3fd1c6b9365f0ddd8cf06b22b3b39ff95c2c6"
	I1025 10:22:32.919392  650937 cri.go:89] found id: "34d10690becbf8807247e176ac1d8a485247e95e7e43b59248e6b35de5993f58"
	I1025 10:22:32.919398  650937 cri.go:89] found id: "a672b9f6352dbc575a968854b42894ae89478ba62caf0dddb38381973fba07e4"
	I1025 10:22:32.919403  650937 cri.go:89] found id: ""
	I1025 10:22:32.919452  650937 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:22:32.933811  650937 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:22:32Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:22:32.933887  650937 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:22:32.943122  650937 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:22:32.943144  650937 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:22:32.943187  650937 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:22:32.951782  650937 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:22:32.952218  650937 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-683681" does not appear in /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:22:32.952375  650937 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-321838/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-683681" cluster setting kubeconfig missing "embed-certs-683681" context setting]
	I1025 10:22:32.952732  650937 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:22:32.953961  650937 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:22:32.962582  650937 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1025 10:22:32.962624  650937 kubeadm.go:601] duration metric: took 19.474145ms to restartPrimaryControlPlane
	I1025 10:22:32.962636  650937 kubeadm.go:402] duration metric: took 76.485212ms to StartCluster
	I1025 10:22:32.962656  650937 settings.go:142] acquiring lock: {Name:mkccf1ae03faaf532633e370e89b56d0fc3d2b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:22:32.962731  650937 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:22:32.963916  650937 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:22:32.964199  650937 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:22:32.964304  650937 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:22:32.964453  650937 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-683681"
	I1025 10:22:32.964458  650937 addons.go:69] Setting dashboard=true in profile "embed-certs-683681"
	I1025 10:22:32.964476  650937 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-683681"
	I1025 10:22:32.964482  650937 addons.go:238] Setting addon dashboard=true in "embed-certs-683681"
	W1025 10:22:32.964489  650937 addons.go:247] addon storage-provisioner should already be in state true
	W1025 10:22:32.964490  650937 addons.go:247] addon dashboard should already be in state true
	I1025 10:22:32.964495  650937 addons.go:69] Setting default-storageclass=true in profile "embed-certs-683681"
	I1025 10:22:32.964521  650937 host.go:66] Checking if "embed-certs-683681" exists ...
	I1025 10:22:32.964522  650937 host.go:66] Checking if "embed-certs-683681" exists ...
	I1025 10:22:32.964534  650937 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-683681"
	I1025 10:22:32.964553  650937 config.go:182] Loaded profile config "embed-certs-683681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:22:32.964888  650937 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:22:32.964914  650937 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:22:32.965022  650937 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:22:32.970498  650937 out.go:179] * Verifying Kubernetes components...
	I1025 10:22:32.972008  650937 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:22:32.990938  650937 addons.go:238] Setting addon default-storageclass=true in "embed-certs-683681"
	W1025 10:22:32.990972  650937 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:22:32.991000  650937 host.go:66] Checking if "embed-certs-683681" exists ...
	I1025 10:22:32.991472  650937 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:22:32.991497  650937 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:22:32.991505  650937 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:22:32.992867  650937 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:22:32.992890  650937 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:22:32.992898  650937 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 10:22:32.992950  650937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:22:32.994388  650937 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:22:32.994409  650937 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:22:32.994728  650937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:22:33.023208  650937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:22:33.030495  650937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:22:33.031038  650937 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:22:33.031059  650937 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:22:33.031123  650937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:22:33.058117  650937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:22:33.132725  650937 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:22:33.150038  650937 node_ready.go:35] waiting up to 6m0s for node "embed-certs-683681" to be "Ready" ...
	I1025 10:22:33.155049  650937 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:22:33.155076  650937 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:22:33.155978  650937 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:22:33.171698  650937 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:22:33.171733  650937 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:22:33.175020  650937 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:22:33.188568  650937 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:22:33.188599  650937 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:22:33.203598  650937 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:22:33.203625  650937 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:22:33.221077  650937 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:22:33.221104  650937 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:22:33.237697  650937 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:22:33.237728  650937 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:22:33.254956  650937 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:22:33.254983  650937 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:22:33.270158  650937 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:22:33.270186  650937 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:22:33.285514  650937 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:22:33.285540  650937 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:22:33.300927  650937 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
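
kubectl accepts repeated -f flags, so all ten dashboard manifests above are applied in one command. Pointing -f at a directory is the other common spelling (illustrative only; it would pick up every manifest under /etc/kubernetes/addons/, not just the dashboard ones):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/
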
	I1025 10:22:34.616048  650937 node_ready.go:49] node "embed-certs-683681" is "Ready"
	I1025 10:22:34.616087  650937 node_ready.go:38] duration metric: took 1.466004388s for node "embed-certs-683681" to be "Ready" ...
	I1025 10:22:34.616105  650937 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:22:34.616160  650937 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:22:35.164539  650937 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.008522975s)
	I1025 10:22:35.164613  650937 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.989556492s)
	I1025 10:22:35.164725  650937 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.863750398s)
	I1025 10:22:35.164740  650937 api_server.go:72] duration metric: took 2.200509022s to wait for apiserver process to appear ...
	I1025 10:22:35.164752  650937 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:22:35.164783  650937 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 10:22:35.166463  650937 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-683681 addons enable metrics-server
	
	I1025 10:22:35.172411  650937 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:22:35.172439  650937 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
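
The [+]/[-] rows are the apiserver's verbose healthz breakdown, one line per registered check; the endpoint keeps returning 500 until every check passes (here the RBAC and priority-class bootstrap hooks are still pending). The same detail can be fetched directly via the ?verbose query parameter (illustrative; depending on the cluster's anonymous-auth setting, a client certificate may be required):

	curl -sk "https://192.168.94.2:8443/healthz?verbose"
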
	I1025 10:22:35.180769  650937 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1025 10:22:35.182025  650937 addons.go:514] duration metric: took 2.217723351s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1025 10:22:35.665533  650937 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 10:22:35.671086  650937 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:22:35.671117  650937 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:22:36.165691  650937 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 10:22:36.170289  650937 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1025 10:22:36.171425  650937 api_server.go:141] control plane version: v1.34.1
	I1025 10:22:36.171457  650937 api_server.go:131] duration metric: took 1.006692122s to wait for apiserver health ...
	I1025 10:22:36.171467  650937 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:22:36.175737  650937 system_pods.go:59] 8 kube-system pods found
	I1025 10:22:36.175775  650937 system_pods.go:61] "coredns-66bc5c9577-545dp" [a2709fe3-a1d1-4394-8cf7-3776dc8fd318] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:22:36.175783  650937 system_pods.go:61] "etcd-embed-certs-683681" [efd93203-1fbf-495a-8d60-73421d0a6d9c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:22:36.175794  650937 system_pods.go:61] "kindnet-5zktx" [3398616a-6eb4-432e-bb84-ae1f166c7e71] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 10:22:36.175801  650937 system_pods.go:61] "kube-apiserver-embed-certs-683681" [0ff30802-f9d1-465d-8d94-769528b99497] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:22:36.175807  650937 system_pods.go:61] "kube-controller-manager-embed-certs-683681" [b794e426-ac8e-48d6-b9e3-38998ed4f272] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:22:36.175813  650937 system_pods.go:61] "kube-proxy-dbks6" [551b9ca3-e53d-4be0-bcb5-b96d76be6c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 10:22:36.175823  650937 system_pods.go:61] "kube-scheduler-embed-certs-683681" [97ed6526-e7b5-4086-a584-7e0cb301fb30] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:22:36.175830  650937 system_pods.go:61] "storage-provisioner" [42d81686-dd78-4ed1-9ead-cbcdca1d14ce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:22:36.175838  650937 system_pods.go:74] duration metric: took 4.363944ms to wait for pod list to return data ...
	I1025 10:22:36.175851  650937 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:22:36.178938  650937 default_sa.go:45] found service account: "default"
	I1025 10:22:36.178969  650937 default_sa.go:55] duration metric: took 3.109602ms for default service account to be created ...
	I1025 10:22:36.178983  650937 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:22:36.182267  650937 system_pods.go:86] 8 kube-system pods found
	I1025 10:22:36.182308  650937 system_pods.go:89] "coredns-66bc5c9577-545dp" [a2709fe3-a1d1-4394-8cf7-3776dc8fd318] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:22:36.182335  650937 system_pods.go:89] "etcd-embed-certs-683681" [efd93203-1fbf-495a-8d60-73421d0a6d9c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:22:36.182346  650937 system_pods.go:89] "kindnet-5zktx" [3398616a-6eb4-432e-bb84-ae1f166c7e71] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 10:22:36.182357  650937 system_pods.go:89] "kube-apiserver-embed-certs-683681" [0ff30802-f9d1-465d-8d94-769528b99497] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:22:36.182365  650937 system_pods.go:89] "kube-controller-manager-embed-certs-683681" [b794e426-ac8e-48d6-b9e3-38998ed4f272] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:22:36.182373  650937 system_pods.go:89] "kube-proxy-dbks6" [551b9ca3-e53d-4be0-bcb5-b96d76be6c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 10:22:36.182378  650937 system_pods.go:89] "kube-scheduler-embed-certs-683681" [97ed6526-e7b5-4086-a584-7e0cb301fb30] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:22:36.182383  650937 system_pods.go:89] "storage-provisioner" [42d81686-dd78-4ed1-9ead-cbcdca1d14ce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:22:36.182390  650937 system_pods.go:126] duration metric: took 3.401116ms to wait for k8s-apps to be running ...
	I1025 10:22:36.182401  650937 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:22:36.182446  650937 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:22:36.196787  650937 system_svc.go:56] duration metric: took 14.374597ms WaitForService to wait for kubelet
	I1025 10:22:36.196824  650937 kubeadm.go:586] duration metric: took 3.232594248s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:22:36.196856  650937 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:22:36.200108  650937 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 10:22:36.200140  650937 node_conditions.go:123] node cpu capacity is 8
	I1025 10:22:36.200158  650937 node_conditions.go:105] duration metric: took 3.297241ms to run NodePressure ...
	I1025 10:22:36.200171  650937 start.go:241] waiting for startup goroutines ...
	I1025 10:22:36.200177  650937 start.go:246] waiting for cluster config update ...
	I1025 10:22:36.200187  650937 start.go:255] writing updated cluster config ...
	I1025 10:22:36.200488  650937 ssh_runner.go:195] Run: rm -f paused
	I1025 10:22:36.204706  650937 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:22:36.208346  650937 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-545dp" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:22:38.216664  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:22:40.715388  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:22:43.215045  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:22:45.714426  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:22:47.714598  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:22:50.213835  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:22:52.214679  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:22:54.714775  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:22:57.214133  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:22:59.214411  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:23:01.214712  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:23:03.713972  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:23:05.714426  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:23:08.214136  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:23:10.216904  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	I1025 10:23:10.714127  650937 pod_ready.go:94] pod "coredns-66bc5c9577-545dp" is "Ready"
	I1025 10:23:10.714153  650937 pod_ready.go:86] duration metric: took 34.505786729s for pod "coredns-66bc5c9577-545dp" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:23:10.717139  650937 pod_ready.go:83] waiting for pod "etcd-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:23:10.723930  650937 pod_ready.go:94] pod "etcd-embed-certs-683681" is "Ready"
	I1025 10:23:10.723954  650937 pod_ready.go:86] duration metric: took 6.78996ms for pod "etcd-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:23:10.726041  650937 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:23:10.729916  650937 pod_ready.go:94] pod "kube-apiserver-embed-certs-683681" is "Ready"
	I1025 10:23:10.729938  650937 pod_ready.go:86] duration metric: took 3.876121ms for pod "kube-apiserver-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:23:10.731795  650937 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:23:10.912596  650937 pod_ready.go:94] pod "kube-controller-manager-embed-certs-683681" is "Ready"
	I1025 10:23:10.912657  650937 pod_ready.go:86] duration metric: took 180.841663ms for pod "kube-controller-manager-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:23:11.112089  650937 pod_ready.go:83] waiting for pod "kube-proxy-dbks6" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:23:11.512096  650937 pod_ready.go:94] pod "kube-proxy-dbks6" is "Ready"
	I1025 10:23:11.512124  650937 pod_ready.go:86] duration metric: took 400.009257ms for pod "kube-proxy-dbks6" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:23:11.712447  650937 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:23:12.112428  650937 pod_ready.go:94] pod "kube-scheduler-embed-certs-683681" is "Ready"
	I1025 10:23:12.112457  650937 pod_ready.go:86] duration metric: took 399.97805ms for pod "kube-scheduler-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:23:12.112470  650937 pod_ready.go:40] duration metric: took 35.907729209s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:23:12.158819  650937 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 10:23:12.161040  650937 out.go:179] * Done! kubectl is now configured to use "embed-certs-683681" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.158850674Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.158883955Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.158912193Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.163160984Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.163202354Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.163247114Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.167484693Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.167524941Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.167552251Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.171954727Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.171989617Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.172014328Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.176344528Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.176385829Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:22:56 embed-certs-683681 crio[567]: time="2025-10-25T10:22:56.299808509Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=00b12e10-779a-4f5c-b0fb-b0e7916c300b name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:22:56 embed-certs-683681 crio[567]: time="2025-10-25T10:22:56.302588818Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cf888b1d-d103-4f64-a006-fa8ea8cb019e name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:22:56 embed-certs-683681 crio[567]: time="2025-10-25T10:22:56.305687984Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7tq6h/dashboard-metrics-scraper" id=5152b775-1558-4625-87b5-1b1a82abf12b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:22:56 embed-certs-683681 crio[567]: time="2025-10-25T10:22:56.305867485Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:22:56 embed-certs-683681 crio[567]: time="2025-10-25T10:22:56.31411453Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:22:56 embed-certs-683681 crio[567]: time="2025-10-25T10:22:56.314699599Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:22:56 embed-certs-683681 crio[567]: time="2025-10-25T10:22:56.347463927Z" level=info msg="Created container 488d8e2589cf8b78062821187d7cc8a70dc6b22b21a81dd249dbd3f24f1fdf7f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7tq6h/dashboard-metrics-scraper" id=5152b775-1558-4625-87b5-1b1a82abf12b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:22:56 embed-certs-683681 crio[567]: time="2025-10-25T10:22:56.348255533Z" level=info msg="Starting container: 488d8e2589cf8b78062821187d7cc8a70dc6b22b21a81dd249dbd3f24f1fdf7f" id=16ef52ce-670e-49ce-9c70-277b4b3f279b name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:22:56 embed-certs-683681 crio[567]: time="2025-10-25T10:22:56.350032155Z" level=info msg="Started container" PID=1776 containerID=488d8e2589cf8b78062821187d7cc8a70dc6b22b21a81dd249dbd3f24f1fdf7f description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7tq6h/dashboard-metrics-scraper id=16ef52ce-670e-49ce-9c70-277b4b3f279b name=/runtime.v1.RuntimeService/StartContainer sandboxID=7f66bd3e62298af75dd6cfbc6be82dd0a5f4120e24bbabde39fbf1599c7f0692
	Oct 25 10:22:56 embed-certs-683681 crio[567]: time="2025-10-25T10:22:56.392865537Z" level=info msg="Removing container: 75c01b005dd97b961c9786f77b020835ade291436872b098b1c7554c1a8f92b4" id=317144e3-3fc7-40f8-ba02-539fdaad3eaa name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:22:56 embed-certs-683681 crio[567]: time="2025-10-25T10:22:56.40316973Z" level=info msg="Removed container 75c01b005dd97b961c9786f77b020835ade291436872b098b1c7554c1a8f92b4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7tq6h/dashboard-metrics-scraper" id=317144e3-3fc7-40f8-ba02-539fdaad3eaa name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	488d8e2589cf8       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago      Exited              dashboard-metrics-scraper   2                   7f66bd3e62298       dashboard-metrics-scraper-6ffb444bf9-7tq6h   kubernetes-dashboard
	ae91c0eace8a7       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   d0c61bfe88f17       kubernetes-dashboard-855c9754f9-b2cmv        kubernetes-dashboard
	8cadb44f0328e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Running             storage-provisioner         1                   1244f314e60cb       storage-provisioner                          kube-system
	ca07c7ae252ac       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   917fd9778d098       busybox                                      default
	9508b4a27687e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   4e065699a4418       coredns-66bc5c9577-545dp                     kube-system
	5a596c77f5df5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   1244f314e60cb       storage-provisioner                          kube-system
	f7c43259b62da       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           51 seconds ago      Running             kube-proxy                  0                   e9833649e183c       kube-proxy-dbks6                             kube-system
	0e43c9fb1569e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   1c2faaf27e736       kindnet-5zktx                                kube-system
	e23b1b78e5c41       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           54 seconds ago      Running             etcd                        0                   15879371c0d8f       etcd-embed-certs-683681                      kube-system
	dc575cdd84b4a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           54 seconds ago      Running             kube-scheduler              0                   034b10356e3c2       kube-scheduler-embed-certs-683681            kube-system
	34d10690becbf       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           54 seconds ago      Running             kube-apiserver              0                   ebe2fa40f64a7       kube-apiserver-embed-certs-683681            kube-system
	a672b9f6352db       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           54 seconds ago      Running             kube-controller-manager     0                   9b260be608e9f       kube-controller-manager-embed-certs-683681   kube-system
	
	
	==> coredns [9508b4a27687ec159979bd17e4bb05c52528b9b9205a5dd4c224cf45bbbdf857] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50425 - 36668 "HINFO IN 7802470207301936582.893117442621612942. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.085493074s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-683681
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-683681
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=embed-certs-683681
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_21_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:21:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-683681
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:23:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:23:05 +0000   Sat, 25 Oct 2025 10:21:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:23:05 +0000   Sat, 25 Oct 2025 10:21:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:23:05 +0000   Sat, 25 Oct 2025 10:21:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:23:05 +0000   Sat, 25 Oct 2025 10:21:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-683681
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                b190e06d-a88f-488c-8710-85f0327cbd4d
	  Boot ID:                    41de8bfa-0cc1-441f-80d4-c56fb8371229
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-545dp                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-embed-certs-683681                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-5zktx                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-embed-certs-683681             250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-embed-certs-683681    200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-dbks6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-embed-certs-683681             100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-7tq6h    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-b2cmv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 105s                 kube-proxy       
	  Normal  Starting                 50s                  kube-proxy       
	  Normal  Starting                 116s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s (x8 over 116s)  kubelet          Node embed-certs-683681 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 116s)  kubelet          Node embed-certs-683681 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x8 over 116s)  kubelet          Node embed-certs-683681 status is now: NodeHasSufficientPID
	  Normal  Starting                 111s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    110s                 kubelet          Node embed-certs-683681 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s                 kubelet          Node embed-certs-683681 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  110s                 kubelet          Node embed-certs-683681 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           106s                 node-controller  Node embed-certs-683681 event: Registered Node embed-certs-683681 in Controller
	  Normal  NodeReady                94s                  kubelet          Node embed-certs-683681 status is now: NodeReady
	  Normal  Starting                 54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)    kubelet          Node embed-certs-683681 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)    kubelet          Node embed-certs-683681 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)    kubelet          Node embed-certs-683681 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                  node-controller  Node embed-certs-683681 event: Registered Node embed-certs-683681 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[Oct25 10:18] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 3d 4d bf 49 5d 08 06
	[  +0.000365] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fe 72 b8 ab d2 81 08 06
	[ +29.291338] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 96 23 11 37 e3 00 08 06
	[  +0.000335] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[ +21.527050] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 89 98 95 1f c3 08 06
	[  +0.000689] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[Oct25 10:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[  +9.472150] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
	[  +6.585715] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ce 90 e9 36 a0 95 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[ +15.111475] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 5e 04 d2 54 0d 08 06
	[  +0.000467] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
	
	
	==> etcd [e23b1b78e5c41f9e1aede2d3b6ae6248ab011db8c6c4eb8d454bf9fb3d83c20d] <==
	{"level":"warn","ts":"2025-10-25T10:22:33.989496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.005357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.011855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.018355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.026548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.032941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.040450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.046902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.055769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.064503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.070653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.077497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.084410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.090765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.097274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.103856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.110241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.117657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.124067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.130874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.137794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.158412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.165738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.172313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.223977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59584","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:23:26 up  2:05,  0 user,  load average: 1.60, 3.98, 5.47
	Linux embed-certs-683681 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0e43c9fb1569ef6e07a5677d2a15b6334bc4fe7db76411edffd13663fe4716c1] <==
	I1025 10:22:35.854528       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:22:35.854801       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1025 10:22:35.854973       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:22:35.854989       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:22:35.855006       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:22:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:22:36.153125       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:22:36.153155       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:22:36.153168       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:22:36.153411       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 10:22:36.553661       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:22:36.553690       1 metrics.go:72] Registering metrics
	I1025 10:22:36.553770       1 controller.go:711] "Syncing nftables rules"
	I1025 10:22:46.153255       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 10:22:46.153366       1 main.go:301] handling current node
	I1025 10:22:56.153277       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 10:22:56.153314       1 main.go:301] handling current node
	I1025 10:23:06.153300       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 10:23:06.153373       1 main.go:301] handling current node
	I1025 10:23:16.152933       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 10:23:16.152971       1 main.go:301] handling current node
	I1025 10:23:26.154440       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 10:23:26.154482       1 main.go:301] handling current node
	
	
	==> kube-apiserver [34d10690becbf8807247e176ac1d8a485247e95e7e43b59248e6b35de5993f58] <==
	I1025 10:22:34.687136       1 aggregator.go:171] initial CRD sync complete...
	I1025 10:22:34.687150       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 10:22:34.687158       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:22:34.687165       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:22:34.687156       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 10:22:34.687419       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 10:22:34.687652       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:22:34.687739       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 10:22:34.690831       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 10:22:34.691260       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1025 10:22:34.695784       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 10:22:34.711072       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 10:22:34.713658       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:22:34.725501       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:22:34.966153       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:22:34.994753       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:22:35.017705       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:22:35.025672       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:22:35.032376       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:22:35.068974       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.100.143"}
	I1025 10:22:35.078709       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.140.29"}
	I1025 10:22:35.591621       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:22:38.441204       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:22:38.491563       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:22:38.592254       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a672b9f6352dbc575a968854b42894ae89478ba62caf0dddb38381973fba07e4] <==
	I1025 10:22:37.998469       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 10:22:37.999661       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 10:22:37.999770       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 10:22:38.002030       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 10:22:38.004402       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 10:22:38.007642       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 10:22:38.009958       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 10:22:38.012289       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:22:38.014657       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 10:22:38.016919       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:22:38.018588       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 10:22:38.021444       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 10:22:38.037777       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:22:38.037833       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 10:22:38.039000       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 10:22:38.039034       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 10:22:38.039185       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 10:22:38.039289       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 10:22:38.039476       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-683681"
	I1025 10:22:38.039541       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 10:22:38.039675       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 10:22:38.039927       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 10:22:38.044450       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 10:22:38.044450       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:22:38.064755       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f7c43259b62da489acda62b9d2e1e2867140658c7c81ddd6b20c46ec720bb6b6] <==
	I1025 10:22:35.712973       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:22:35.803782       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:22:35.904920       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:22:35.905026       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1025 10:22:35.905148       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:22:35.923981       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:22:35.924059       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:22:35.929391       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:22:35.929751       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:22:35.929787       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:22:35.932591       1 config.go:200] "Starting service config controller"
	I1025 10:22:35.932614       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:22:35.932597       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:22:35.932656       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:22:35.932670       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:22:35.932681       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:22:35.932834       1 config.go:309] "Starting node config controller"
	I1025 10:22:35.932888       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:22:35.932924       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:22:36.032823       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:22:36.032883       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 10:22:36.032884       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [dc575cdd84b4a101c9861bb4bbb3fd1c6b9365f0ddd8cf06b22b3b39ff95c2c6] <==
	I1025 10:22:33.610091       1 serving.go:386] Generated self-signed cert in-memory
	W1025 10:22:34.608781       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 10:22:34.608830       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 10:22:34.608843       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 10:22:34.608853       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 10:22:34.649983       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:22:34.650015       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:22:34.653640       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:22:34.653702       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:22:34.655536       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:22:34.656040       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 10:22:34.754779       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:22:38 embed-certs-683681 kubelet[724]: I1025 10:22:38.746507     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ea38f57e-a5bf-47fc-b9c0-d287bd1036f4-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-7tq6h\" (UID: \"ea38f57e-a5bf-47fc-b9c0-d287bd1036f4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7tq6h"
	Oct 25 10:22:38 embed-certs-683681 kubelet[724]: I1025 10:22:38.746620     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2bfd\" (UniqueName: \"kubernetes.io/projected/ea38f57e-a5bf-47fc-b9c0-d287bd1036f4-kube-api-access-t2bfd\") pod \"dashboard-metrics-scraper-6ffb444bf9-7tq6h\" (UID: \"ea38f57e-a5bf-47fc-b9c0-d287bd1036f4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7tq6h"
	Oct 25 10:22:38 embed-certs-683681 kubelet[724]: I1025 10:22:38.746686     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lblh9\" (UniqueName: \"kubernetes.io/projected/104da91e-df0f-49a9-bf95-7fd18378292d-kube-api-access-lblh9\") pod \"kubernetes-dashboard-855c9754f9-b2cmv\" (UID: \"104da91e-df0f-49a9-bf95-7fd18378292d\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b2cmv"
	Oct 25 10:22:40 embed-certs-683681 kubelet[724]: I1025 10:22:40.644554     724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 25 10:22:41 embed-certs-683681 kubelet[724]: I1025 10:22:41.344410     724 scope.go:117] "RemoveContainer" containerID="5f26e9553f776d73cefd45b0376452fecb859bf352bdb0b5a86e0f15ee46f871"
	Oct 25 10:22:42 embed-certs-683681 kubelet[724]: I1025 10:22:42.349164     724 scope.go:117] "RemoveContainer" containerID="5f26e9553f776d73cefd45b0376452fecb859bf352bdb0b5a86e0f15ee46f871"
	Oct 25 10:22:42 embed-certs-683681 kubelet[724]: I1025 10:22:42.349302     724 scope.go:117] "RemoveContainer" containerID="75c01b005dd97b961c9786f77b020835ade291436872b098b1c7554c1a8f92b4"
	Oct 25 10:22:42 embed-certs-683681 kubelet[724]: E1025 10:22:42.349562     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7tq6h_kubernetes-dashboard(ea38f57e-a5bf-47fc-b9c0-d287bd1036f4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7tq6h" podUID="ea38f57e-a5bf-47fc-b9c0-d287bd1036f4"
	Oct 25 10:22:43 embed-certs-683681 kubelet[724]: I1025 10:22:43.353932     724 scope.go:117] "RemoveContainer" containerID="75c01b005dd97b961c9786f77b020835ade291436872b098b1c7554c1a8f92b4"
	Oct 25 10:22:43 embed-certs-683681 kubelet[724]: E1025 10:22:43.354133     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7tq6h_kubernetes-dashboard(ea38f57e-a5bf-47fc-b9c0-d287bd1036f4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7tq6h" podUID="ea38f57e-a5bf-47fc-b9c0-d287bd1036f4"
	Oct 25 10:22:44 embed-certs-683681 kubelet[724]: I1025 10:22:44.356879     724 scope.go:117] "RemoveContainer" containerID="75c01b005dd97b961c9786f77b020835ade291436872b098b1c7554c1a8f92b4"
	Oct 25 10:22:44 embed-certs-683681 kubelet[724]: E1025 10:22:44.357134     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7tq6h_kubernetes-dashboard(ea38f57e-a5bf-47fc-b9c0-d287bd1036f4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7tq6h" podUID="ea38f57e-a5bf-47fc-b9c0-d287bd1036f4"
	Oct 25 10:22:45 embed-certs-683681 kubelet[724]: I1025 10:22:45.372539     724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b2cmv" podStartSLOduration=1.686433971 podStartE2EDuration="7.372513668s" podCreationTimestamp="2025-10-25 10:22:38 +0000 UTC" firstStartedPulling="2025-10-25 10:22:38.988772835 +0000 UTC m=+6.784459906" lastFinishedPulling="2025-10-25 10:22:44.674852532 +0000 UTC m=+12.470539603" observedRunningTime="2025-10-25 10:22:45.372092175 +0000 UTC m=+13.167779289" watchObservedRunningTime="2025-10-25 10:22:45.372513668 +0000 UTC m=+13.168200761"
	Oct 25 10:22:56 embed-certs-683681 kubelet[724]: I1025 10:22:56.299295     724 scope.go:117] "RemoveContainer" containerID="75c01b005dd97b961c9786f77b020835ade291436872b098b1c7554c1a8f92b4"
	Oct 25 10:22:56 embed-certs-683681 kubelet[724]: I1025 10:22:56.391518     724 scope.go:117] "RemoveContainer" containerID="75c01b005dd97b961c9786f77b020835ade291436872b098b1c7554c1a8f92b4"
	Oct 25 10:22:56 embed-certs-683681 kubelet[724]: I1025 10:22:56.391779     724 scope.go:117] "RemoveContainer" containerID="488d8e2589cf8b78062821187d7cc8a70dc6b22b21a81dd249dbd3f24f1fdf7f"
	Oct 25 10:22:56 embed-certs-683681 kubelet[724]: E1025 10:22:56.392055     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7tq6h_kubernetes-dashboard(ea38f57e-a5bf-47fc-b9c0-d287bd1036f4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7tq6h" podUID="ea38f57e-a5bf-47fc-b9c0-d287bd1036f4"
	Oct 25 10:23:02 embed-certs-683681 kubelet[724]: I1025 10:23:02.490379     724 scope.go:117] "RemoveContainer" containerID="488d8e2589cf8b78062821187d7cc8a70dc6b22b21a81dd249dbd3f24f1fdf7f"
	Oct 25 10:23:02 embed-certs-683681 kubelet[724]: E1025 10:23:02.490639     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7tq6h_kubernetes-dashboard(ea38f57e-a5bf-47fc-b9c0-d287bd1036f4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7tq6h" podUID="ea38f57e-a5bf-47fc-b9c0-d287bd1036f4"
	Oct 25 10:23:15 embed-certs-683681 kubelet[724]: I1025 10:23:15.296968     724 scope.go:117] "RemoveContainer" containerID="488d8e2589cf8b78062821187d7cc8a70dc6b22b21a81dd249dbd3f24f1fdf7f"
	Oct 25 10:23:15 embed-certs-683681 kubelet[724]: E1025 10:23:15.297178     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7tq6h_kubernetes-dashboard(ea38f57e-a5bf-47fc-b9c0-d287bd1036f4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7tq6h" podUID="ea38f57e-a5bf-47fc-b9c0-d287bd1036f4"
	Oct 25 10:23:24 embed-certs-683681 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:23:24 embed-certs-683681 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:23:24 embed-certs-683681 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 10:23:24 embed-certs-683681 systemd[1]: kubelet.service: Consumed 1.827s CPU time.
	
	
	==> kubernetes-dashboard [ae91c0eace8a71b1845d97507f08b3cce89463dc558fab2ed073d1b251d048a2] <==
	2025/10/25 10:22:44 Starting overwatch
	2025/10/25 10:22:44 Using namespace: kubernetes-dashboard
	2025/10/25 10:22:44 Using in-cluster config to connect to apiserver
	2025/10/25 10:22:44 Using secret token for csrf signing
	2025/10/25 10:22:44 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 10:22:44 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 10:22:44 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 10:22:44 Generating JWE encryption key
	2025/10/25 10:22:44 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 10:22:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 10:22:44 Initializing JWE encryption key from synchronized object
	2025/10/25 10:22:44 Creating in-cluster Sidecar client
	2025/10/25 10:22:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:22:44 Serving insecurely on HTTP port: 9090
	2025/10/25 10:23:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [5a596c77f5df556e869709b8cf5dcb9c78dc06441ded8c2f7831e35736644375] <==
	I1025 10:22:35.674811       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 10:22:35.678932       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [8cadb44f0328e5bdc8a75a15aa015760ee35f78df670f55e688fcc7b1659aeef] <==
	W1025 10:23:01.828847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:03.832398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:03.837778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:05.841245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:05.845469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:07.849795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:07.854436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:09.858341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:09.864218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:11.867927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:11.872860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:13.876565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:13.882465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:15.885810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:15.892041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:17.896543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:17.901182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:19.905299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:19.913671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:21.917360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:21.921866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:23.925519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:23.930269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:25.934510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:25.939738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
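The "extra waiting" phase in the log above polls each kube-system pod for up to 4m0s until it reports Ready (coredns-66bc5c9577-545dp alone took ~34.5s). A rough manual equivalent of that check, using kubectl wait rather than minikube's internal poller and assuming the same embed-certs-683681 context, is:

	# wait (up to the harness's 4m0s budget) for the CoreDNS pods to report Ready
	kubectl --context embed-certs-683681 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s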
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-683681 -n embed-certs-683681
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-683681 -n embed-certs-683681: exit status 2 (359.472846ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-683681 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
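The two probes above, a templated minikube status query (exit status 2 even though it printed "Running") and a field-selector scan for non-Running pods, are the harness's standard failure triage. Run by hand against the same profile they look like this:

	# print only the APIServer field of the profile's status via a Go template
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p embed-certs-683681 -n embed-certs-683681
	# list every pod, in any namespace, whose phase is not Running
	kubectl --context embed-certs-683681 get po -A \
	  -o=jsonpath='{.items[*].metadata.name}' \
	  --field-selector=status.phase!=Running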
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-683681
helpers_test.go:243: (dbg) docker inspect embed-certs-683681:

-- stdout --
	[
	    {
	        "Id": "664aed4a01f92c3871e075fd05ccbeee5ecb48fcd448f58432e85e0dc505d878",
	        "Created": "2025-10-25T10:21:16.235046016Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 651136,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:22:26.068501528Z",
	            "FinishedAt": "2025-10-25T10:22:25.128540678Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/664aed4a01f92c3871e075fd05ccbeee5ecb48fcd448f58432e85e0dc505d878/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/664aed4a01f92c3871e075fd05ccbeee5ecb48fcd448f58432e85e0dc505d878/hostname",
	        "HostsPath": "/var/lib/docker/containers/664aed4a01f92c3871e075fd05ccbeee5ecb48fcd448f58432e85e0dc505d878/hosts",
	        "LogPath": "/var/lib/docker/containers/664aed4a01f92c3871e075fd05ccbeee5ecb48fcd448f58432e85e0dc505d878/664aed4a01f92c3871e075fd05ccbeee5ecb48fcd448f58432e85e0dc505d878-json.log",
	        "Name": "/embed-certs-683681",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-683681:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-683681",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "664aed4a01f92c3871e075fd05ccbeee5ecb48fcd448f58432e85e0dc505d878",
	                "LowerDir": "/var/lib/docker/overlay2/22dc02559454c5069aa97024407358906ca2c7013bf26825d319003749eb66b4-init/diff:/var/lib/docker/overlay2/9d1960ec6ab151df7efbd4d43f9ccd1aaf4e0bd9e7db4285118644a6d5a54279/diff",
	                "MergedDir": "/var/lib/docker/overlay2/22dc02559454c5069aa97024407358906ca2c7013bf26825d319003749eb66b4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/22dc02559454c5069aa97024407358906ca2c7013bf26825d319003749eb66b4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/22dc02559454c5069aa97024407358906ca2c7013bf26825d319003749eb66b4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-683681",
	                "Source": "/var/lib/docker/volumes/embed-certs-683681/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-683681",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-683681",
	                "name.minikube.sigs.k8s.io": "embed-certs-683681",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "afd5f9ce71f28c93399bb97ba8437374eb3f6b307416eb354a32ca1583210d02",
	            "SandboxKey": "/var/run/docker/netns/afd5f9ce71f2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-683681": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:7f:4b:23:62:45",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "afda803609319b40fede74121fd584f53a0a22be2a797d9c1be1e1370a5a8dff",
	                    "EndpointID": "df4d9db28a156fa3904fa5a42edd76ec577c26329a871f9272cdfbeef93a64ed",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-683681",
	                        "664aed4a01f9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
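This inspect document is the same JSON the provisioner later queries with Go templates such as {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}. A minimal sketch that decodes it in Go and extracts the mapped SSH port (only the fields used are declared; the values match the output above):

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// inspect declares just the fields this sketch reads from `docker inspect`.
	type inspect struct {
		Name            string
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		// Usage: docker inspect embed-certs-683681 | go run main.go
		var containers []inspect
		if err := json.NewDecoder(os.Stdin).Decode(&containers); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, c := range containers {
			if b := c.NetworkSettings.Ports["22/tcp"]; len(b) > 0 {
				// For the container above this prints: /embed-certs-683681 ssh -> 127.0.0.1:33133
				fmt.Printf("%s ssh -> %s:%s\n", c.Name, b[0].HostIp, b[0].HostPort)
			}
		}
	}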
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-683681 -n embed-certs-683681
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-683681 -n embed-certs-683681: exit status 2 (340.014234ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-683681 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-683681 logs -n 25: (1.138716519s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p newest-cni-667966 --alsologtostderr -v=1                                                                                                                              │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-767846 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ start   │ -p default-k8s-diff-port-767846 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p newest-cni-667966                                                                                                                                                     │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p newest-cni-667966                                                                                                                                                     │ newest-cni-667966            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p disable-driver-mounts-805899                                                                                                                                          │ disable-driver-mounts-805899 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ start   │ -p embed-certs-683681 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-683681           │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ image   │ old-k8s-version-714798 image list --format=json                                                                                                                          │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ pause   │ -p old-k8s-version-714798 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	│ delete  │ -p old-k8s-version-714798                                                                                                                                                │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p old-k8s-version-714798                                                                                                                                                │ old-k8s-version-714798       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ image   │ no-preload-899665 image list --format=json                                                                                                                               │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ pause   │ -p no-preload-899665 --alsologtostderr -v=1                                                                                                                              │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	│ delete  │ -p no-preload-899665                                                                                                                                                     │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:22 UTC │
	│ image   │ default-k8s-diff-port-767846 image list --format=json                                                                                                                    │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │ 25 Oct 25 10:22 UTC │
	│ pause   │ -p default-k8s-diff-port-767846 --alsologtostderr -v=1                                                                                                                   │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │                     │
	│ delete  │ -p no-preload-899665                                                                                                                                                     │ no-preload-899665            │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │ 25 Oct 25 10:22 UTC │
	│ addons  │ enable metrics-server -p embed-certs-683681 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-683681           │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-767846                                                                                                                                          │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │ 25 Oct 25 10:22 UTC │
	│ stop    │ -p embed-certs-683681 --alsologtostderr -v=3                                                                                                                             │ embed-certs-683681           │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │ 25 Oct 25 10:22 UTC │
	│ delete  │ -p default-k8s-diff-port-767846                                                                                                                                          │ default-k8s-diff-port-767846 │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │ 25 Oct 25 10:22 UTC │
	│ addons  │ enable dashboard -p embed-certs-683681 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-683681           │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │ 25 Oct 25 10:22 UTC │
	│ start   │ -p embed-certs-683681 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-683681           │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │ 25 Oct 25 10:23 UTC │
	│ image   │ embed-certs-683681 image list --format=json                                                                                                                              │ embed-certs-683681           │ jenkins │ v1.37.0 │ 25 Oct 25 10:23 UTC │ 25 Oct 25 10:23 UTC │
	│ pause   │ -p embed-certs-683681 --alsologtostderr -v=1                                                                                                                             │ embed-certs-683681           │ jenkins │ v1.37.0 │ 25 Oct 25 10:23 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:22:25
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:22:25.811526  650937 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:22:25.811824  650937 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:22:25.811836  650937 out.go:374] Setting ErrFile to fd 2...
	I1025 10:22:25.811841  650937 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:22:25.812034  650937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 10:22:25.812528  650937 out.go:368] Setting JSON to false
	I1025 10:22:25.813671  650937 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7495,"bootTime":1761380251,"procs":309,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 10:22:25.813796  650937 start.go:141] virtualization: kvm guest
	I1025 10:22:25.816027  650937 out.go:179] * [embed-certs-683681] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 10:22:25.817628  650937 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:22:25.817669  650937 notify.go:220] Checking for updates...
	I1025 10:22:25.820589  650937 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:22:25.821848  650937 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:22:25.823064  650937 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	I1025 10:22:25.824573  650937 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 10:22:25.825915  650937 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:22:25.828050  650937 config.go:182] Loaded profile config "embed-certs-683681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:22:25.828919  650937 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:22:25.855578  650937 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 10:22:25.855692  650937 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:22:25.918056  650937 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-25 10:22:25.906868562 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:22:25.918205  650937 docker.go:318] overlay module found
	I1025 10:22:25.920390  650937 out.go:179] * Using the docker driver based on existing profile
	I1025 10:22:25.921798  650937 start.go:305] selected driver: docker
	I1025 10:22:25.921824  650937 start.go:925] validating driver "docker" against &{Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:22:25.921957  650937 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:22:25.922800  650937 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:22:25.989584  650937 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-25 10:22:25.978026276 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:22:25.989904  650937 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:22:25.989940  650937 cni.go:84] Creating CNI manager for ""
	I1025 10:22:25.989975  650937 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:22:25.990022  650937 start.go:349] cluster config:
	{Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:22:25.992139  650937 out.go:179] * Starting "embed-certs-683681" primary control-plane node in "embed-certs-683681" cluster
	I1025 10:22:25.993435  650937 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:22:25.994691  650937 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:22:25.995868  650937 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:22:25.995916  650937 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:22:25.995925  650937 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 10:22:25.995965  650937 cache.go:58] Caching tarball of preloaded images
	I1025 10:22:25.996079  650937 preload.go:233] Found /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 10:22:25.996092  650937 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:22:25.996218  650937 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/config.json ...
	I1025 10:22:26.018405  650937 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:22:26.018433  650937 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:22:26.018459  650937 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:22:26.018491  650937 start.go:360] acquireMachinesLock for embed-certs-683681: {Name:mkb49d854e007783568583b216321c2ada753d14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:22:26.018597  650937 start.go:364] duration metric: took 58.454µs to acquireMachinesLock for "embed-certs-683681"
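The machines lock is configured with Delay:500ms and Timeout:10m0s, i.e. poll every half second for up to ten minutes; here it is acquired in 58µs because nothing else holds it. A minimal sketch of that acquire pattern, assuming a plain exclusive lock file (the mechanism and names are illustrative, not minikube's implementation):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// acquire polls for an exclusive lock file until timeout elapses,
	// mirroring the Delay/Timeout pair in the log line above.
	func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if !errors.Is(err, os.ErrExist) {
				return nil, err
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s after %s", path, timeout)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock held")
	}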
	I1025 10:22:26.018625  650937 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:22:26.018637  650937 fix.go:54] fixHost starting: 
	I1025 10:22:26.018950  650937 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:22:26.037651  650937 fix.go:112] recreateIfNeeded on embed-certs-683681: state=Stopped err=<nil>
	W1025 10:22:26.037685  650937 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:22:26.039795  650937 out.go:252] * Restarting existing docker container for "embed-certs-683681" ...
	I1025 10:22:26.039883  650937 cli_runner.go:164] Run: docker start embed-certs-683681
	I1025 10:22:26.298888  650937 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:22:26.322052  650937 kic.go:430] container "embed-certs-683681" state is running.
	I1025 10:22:26.322558  650937 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-683681
	I1025 10:22:26.342786  650937 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/config.json ...
	I1025 10:22:26.343059  650937 machine.go:93] provisionDockerMachine start ...
	I1025 10:22:26.343126  650937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:22:26.363148  650937 main.go:141] libmachine: Using SSH client type: native
	I1025 10:22:26.363460  650937 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1025 10:22:26.363477  650937 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:22:26.364238  650937 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42172->127.0.0.1:33133: read: connection reset by peer
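The handshake failure here is expected: sshd inside the just-restarted container is not up yet, so the provisioner retries until the hostname command succeeds three seconds later (next line). A minimal sketch of that wait-for-port pattern (illustrative, not minikube's code; a plain TCP probe simplifies the SSH handshake retry):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForTCP dials addr repeatedly until it connects or the deadline passes.
	func waitForTCP(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("giving up on %s: %v", addr, err)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		// 127.0.0.1:33133 is the forwarded SSH port from the inspect output above.
		if err := waitForTCP("127.0.0.1:33133", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}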
	I1025 10:22:29.510043  650937 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-683681
	
	I1025 10:22:29.510096  650937 ubuntu.go:182] provisioning hostname "embed-certs-683681"
	I1025 10:22:29.510185  650937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:22:29.529913  650937 main.go:141] libmachine: Using SSH client type: native
	I1025 10:22:29.530146  650937 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1025 10:22:29.530159  650937 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-683681 && echo "embed-certs-683681" | sudo tee /etc/hostname
	I1025 10:22:29.686569  650937 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-683681
	
	I1025 10:22:29.686645  650937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:22:29.706958  650937 main.go:141] libmachine: Using SSH client type: native
	I1025 10:22:29.707260  650937 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1025 10:22:29.707306  650937 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-683681' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-683681/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-683681' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:22:29.851908  650937 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:22:29.851946  650937 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-321838/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-321838/.minikube}
	I1025 10:22:29.851970  650937 ubuntu.go:190] setting up certificates
	I1025 10:22:29.851989  650937 provision.go:84] configureAuth start
	I1025 10:22:29.852043  650937 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-683681
	I1025 10:22:29.871251  650937 provision.go:143] copyHostCerts
	I1025 10:22:29.871352  650937 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem, removing ...
	I1025 10:22:29.871379  650937 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem
	I1025 10:22:29.871471  650937 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/ca.pem (1078 bytes)
	I1025 10:22:29.871635  650937 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem, removing ...
	I1025 10:22:29.871678  650937 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem
	I1025 10:22:29.871729  650937 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/cert.pem (1123 bytes)
	I1025 10:22:29.871814  650937 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem, removing ...
	I1025 10:22:29.871822  650937 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem
	I1025 10:22:29.871863  650937 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-321838/.minikube/key.pem (1679 bytes)
	I1025 10:22:29.872085  650937 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem org=jenkins.embed-certs-683681 san=[127.0.0.1 192.168.94.2 embed-certs-683681 localhost minikube]
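The server certificate generated here carries the SANs from the log line above (127.0.0.1, 192.168.94.2, the hostname, localhost, minikube) and, per the profile config, a 26280h expiry. A minimal self-signed sketch with crypto/x509 (illustrative; the real flow signs with the CA pair from ca.pem/ca-key.pem):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-683681"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the provision log line above.
			DNSNames:    []string{"embed-certs-683681", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		}
		// Self-signed for brevity: the template doubles as the parent certificate.
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}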
	I1025 10:22:29.984343  650937 provision.go:177] copyRemoteCerts
	I1025 10:22:29.984415  650937 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:22:29.984456  650937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:22:30.003605  650937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:22:30.106691  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:22:30.125676  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1025 10:22:30.145055  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:22:30.164000  650937 provision.go:87] duration metric: took 311.99694ms to configureAuth
	I1025 10:22:30.164030  650937 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:22:30.164234  650937 config.go:182] Loaded profile config "embed-certs-683681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:22:30.164356  650937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:22:30.183441  650937 main.go:141] libmachine: Using SSH client type: native
	I1025 10:22:30.183697  650937 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1025 10:22:30.183724  650937 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:22:30.491506  650937 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:22:30.491534  650937 machine.go:96] duration metric: took 4.148458506s to provisionDockerMachine
	I1025 10:22:30.491550  650937 start.go:293] postStartSetup for "embed-certs-683681" (driver="docker")
	I1025 10:22:30.491566  650937 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:22:30.491634  650937 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:22:30.491687  650937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:22:30.511988  650937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:22:30.616719  650937 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:22:30.620710  650937 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:22:30.620740  650937 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:22:30.620754  650937 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/addons for local assets ...
	I1025 10:22:30.620807  650937 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-321838/.minikube/files for local assets ...
	I1025 10:22:30.620876  650937 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem -> 3254552.pem in /etc/ssl/certs
	I1025 10:22:30.620973  650937 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:22:30.629162  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:22:30.648583  650937 start.go:296] duration metric: took 157.013923ms for postStartSetup
	I1025 10:22:30.648667  650937 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:22:30.648705  650937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:22:30.667816  650937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:22:30.768186  650937 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:22:30.773180  650937 fix.go:56] duration metric: took 4.754534958s for fixHost
	I1025 10:22:30.773214  650937 start.go:83] releasing machines lock for "embed-certs-683681", held for 4.754601126s
	I1025 10:22:30.773296  650937 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-683681
	I1025 10:22:30.792498  650937 ssh_runner.go:195] Run: cat /version.json
	I1025 10:22:30.792549  650937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:22:30.792594  650937 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:22:30.792699  650937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:22:30.812116  650937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:22:30.812288  650937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:22:30.965514  650937 ssh_runner.go:195] Run: systemctl --version
	I1025 10:22:30.972715  650937 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:22:31.012006  650937 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:22:31.017272  650937 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:22:31.017362  650937 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:22:31.026209  650937 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:22:31.026242  650937 start.go:495] detecting cgroup driver to use...
	I1025 10:22:31.026283  650937 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 10:22:31.026350  650937 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:22:31.042521  650937 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:22:31.056334  650937 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:22:31.056406  650937 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:22:31.073008  650937 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:22:31.087153  650937 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:22:31.175207  650937 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:22:31.256726  650937 docker.go:234] disabling docker service ...
	I1025 10:22:31.256796  650937 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:22:31.272066  650937 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:22:31.285614  650937 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:22:31.367461  650937 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:22:31.449361  650937 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:22:31.463666  650937 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:22:31.479927  650937 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:22:31.479993  650937 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:22:31.490565  650937 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 10:22:31.490649  650937 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:22:31.500815  650937 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:22:31.510530  650937 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:22:31.520022  650937 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:22:31.529061  650937 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:22:31.538958  650937 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:22:31.548107  650937 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:22:31.557729  650937 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:22:31.565991  650937 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:22:31.574556  650937 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:22:31.657549  650937 ssh_runner.go:195] Run: sudo systemctl restart crio
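The sed edits above leave the CRI-O drop-in with roughly these keys before the restart (a sketch only; section placement and the rest of /etc/crio/crio.conf.d/02-crio.conf are assumptions, not a dump of the actual file):

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"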
	I1025 10:22:31.775056  650937 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:22:31.775132  650937 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:22:31.779628  650937 start.go:563] Will wait 60s for crictl version
	I1025 10:22:31.779691  650937 ssh_runner.go:195] Run: which crictl
	I1025 10:22:31.783608  650937 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:22:31.809684  650937 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:22:31.809759  650937 ssh_runner.go:195] Run: crio --version
	I1025 10:22:31.841199  650937 ssh_runner.go:195] Run: crio --version
	I1025 10:22:31.874396  650937 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:22:31.875887  650937 cli_runner.go:164] Run: docker network inspect embed-certs-683681 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:22:31.894932  650937 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1025 10:22:31.899692  650937 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:22:31.911140  650937 kubeadm.go:883] updating cluster {Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:22:31.911272  650937 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:22:31.911348  650937 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:22:31.948425  650937 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:22:31.948449  650937 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:22:31.948513  650937 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:22:31.974990  650937 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:22:31.975013  650937 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:22:31.975021  650937 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1025 10:22:31.975177  650937 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-683681 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:22:31.975265  650937 ssh_runner.go:195] Run: crio config
	I1025 10:22:32.023037  650937 cni.go:84] Creating CNI manager for ""
	I1025 10:22:32.023058  650937 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:22:32.023088  650937 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:22:32.023122  650937 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-683681 NodeName:embed-certs-683681 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:22:32.023280  650937 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-683681"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
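The kubeadm config rendered above is one multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by ---. A sketch of walking such a stream with gopkg.in/yaml.v3 to list each document's kind (the kubeadm.yaml path is an assumption for the example):

    // Decode a multi-document YAML stream document by document and
    // print each apiVersion/kind pair. Sketch, assuming ./kubeadm.yaml.
    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
        }
    }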
	I1025 10:22:32.023373  650937 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:22:32.032302  650937 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:22:32.032384  650937 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:22:32.040941  650937 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1025 10:22:32.054665  650937 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:22:32.068612  650937 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1025 10:22:32.082508  650937 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:22:32.086585  650937 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:22:32.097751  650937 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:22:32.175518  650937 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:22:32.202070  650937 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681 for IP: 192.168.94.2
	I1025 10:22:32.202095  650937 certs.go:195] generating shared ca certs ...
	I1025 10:22:32.202122  650937 certs.go:227] acquiring lock for ca certs: {Name:mke559c04eea9c265cb2a7beab0da125bee52db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:22:32.202273  650937 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key
	I1025 10:22:32.202330  650937 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key
	I1025 10:22:32.202346  650937 certs.go:257] generating profile certs ...
	I1025 10:22:32.202433  650937 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/client.key
	I1025 10:22:32.202500  650937 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key.b6974f81
	I1025 10:22:32.202541  650937 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key
	I1025 10:22:32.202646  650937 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem (1338 bytes)
	W1025 10:22:32.202676  650937 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455_empty.pem, impossibly tiny 0 bytes
	I1025 10:22:32.202704  650937 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:22:32.202728  650937 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:22:32.202800  650937 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:22:32.202834  650937 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/certs/key.pem (1679 bytes)
	I1025 10:22:32.202873  650937 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem (1708 bytes)
	I1025 10:22:32.203433  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:22:32.223965  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 10:22:32.244737  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:22:32.266559  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:22:32.292247  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1025 10:22:32.312464  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 10:22:32.333092  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:22:32.352618  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/embed-certs-683681/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:22:32.372363  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/ssl/certs/3254552.pem --> /usr/share/ca-certificates/3254552.pem (1708 bytes)
	I1025 10:22:32.392680  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:22:32.413281  650937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-321838/.minikube/certs/325455.pem --> /usr/share/ca-certificates/325455.pem (1338 bytes)
	I1025 10:22:32.431923  650937 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:22:32.446205  650937 ssh_runner.go:195] Run: openssl version
	I1025 10:22:32.452911  650937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3254552.pem && ln -fs /usr/share/ca-certificates/3254552.pem /etc/ssl/certs/3254552.pem"
	I1025 10:22:32.462222  650937 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3254552.pem
	I1025 10:22:32.466305  650937 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:38 /usr/share/ca-certificates/3254552.pem
	I1025 10:22:32.466398  650937 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3254552.pem
	I1025 10:22:32.501395  650937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3254552.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:22:32.510850  650937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:22:32.520220  650937 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:22:32.524259  650937 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:32 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:22:32.524336  650937 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:22:32.559601  650937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:22:32.568831  650937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/325455.pem && ln -fs /usr/share/ca-certificates/325455.pem /etc/ssl/certs/325455.pem"
	I1025 10:22:32.578293  650937 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/325455.pem
	I1025 10:22:32.582771  650937 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:38 /usr/share/ca-certificates/325455.pem
	I1025 10:22:32.582837  650937 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/325455.pem
	I1025 10:22:32.618701  650937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/325455.pem /etc/ssl/certs/51391683.0"
	I1025 10:22:32.628354  650937 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:22:32.632792  650937 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:22:32.667884  650937 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:22:32.703809  650937 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:22:32.748759  650937 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:22:32.790091  650937 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:22:32.832785  650937 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
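Each openssl x509 -noout -in <cert> -checkend 86400 run above asks a single question: will this certificate still be valid 24 hours from now? The equivalent check with Go's crypto/x509, assuming a hypothetical apiserver.crt path:

    // Exit non-zero if the certificate expires within the next 86400s,
    // mirroring openssl's -checkend semantics. Sketch only.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("apiserver.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        if cert.NotAfter.Before(time.Now().Add(24 * time.Hour)) {
            fmt.Println("certificate will expire within 86400 seconds")
            os.Exit(1)
        }
        fmt.Println("certificate ok")
    }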
	I1025 10:22:32.886164  650937 kubeadm.go:400] StartCluster: {Name:embed-certs-683681 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-683681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:22:32.886287  650937 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:22:32.886397  650937 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:22:32.919354  650937 cri.go:89] found id: "e23b1b78e5c41f9e1aede2d3b6ae6248ab011db8c6c4eb8d454bf9fb3d83c20d"
	I1025 10:22:32.919385  650937 cri.go:89] found id: "dc575cdd84b4a101c9861bb4bbb3fd1c6b9365f0ddd8cf06b22b3b39ff95c2c6"
	I1025 10:22:32.919392  650937 cri.go:89] found id: "34d10690becbf8807247e176ac1d8a485247e95e7e43b59248e6b35de5993f58"
	I1025 10:22:32.919398  650937 cri.go:89] found id: "a672b9f6352dbc575a968854b42894ae89478ba62caf0dddb38381973fba07e4"
	I1025 10:22:32.919403  650937 cri.go:89] found id: ""
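The container IDs above come from the label-filtered crictl listing a few lines earlier (the trailing empty ID is an artifact of splitting the command output on newlines). A sketch that shells out the same way, assuming crictl on PATH and passwordless sudo:

    // List kube-system container IDs via crictl, as the log's
    // "crictl ps -a --quiet --label ..." run does. Sketch only.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            panic(err)
        }
        for _, id := range strings.Fields(string(out)) {
            fmt.Println("found id:", id)
        }
    }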
	I1025 10:22:32.919452  650937 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:22:32.933811  650937 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:22:32Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:22:32.933887  650937 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:22:32.943122  650937 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:22:32.943144  650937 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:22:32.943187  650937 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:22:32.951782  650937 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:22:32.952218  650937 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-683681" does not appear in /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:22:32.952375  650937 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-321838/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-683681" cluster setting kubeconfig missing "embed-certs-683681" context setting]
	I1025 10:22:32.952732  650937 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:22:32.953961  650937 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:22:32.962582  650937 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1025 10:22:32.962624  650937 kubeadm.go:601] duration metric: took 19.474145ms to restartPrimaryControlPlane
	I1025 10:22:32.962636  650937 kubeadm.go:402] duration metric: took 76.485212ms to StartCluster
	I1025 10:22:32.962656  650937 settings.go:142] acquiring lock: {Name:mkccf1ae03faaf532633e370e89b56d0fc3d2b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:22:32.962731  650937 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:22:32.963916  650937 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/kubeconfig: {Name:mkdf45b680efe6cfa59b0c430a787dbd9940c379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:22:32.964199  650937 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:22:32.964304  650937 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:22:32.964453  650937 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-683681"
	I1025 10:22:32.964458  650937 addons.go:69] Setting dashboard=true in profile "embed-certs-683681"
	I1025 10:22:32.964476  650937 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-683681"
	I1025 10:22:32.964482  650937 addons.go:238] Setting addon dashboard=true in "embed-certs-683681"
	W1025 10:22:32.964489  650937 addons.go:247] addon storage-provisioner should already be in state true
	W1025 10:22:32.964490  650937 addons.go:247] addon dashboard should already be in state true
	I1025 10:22:32.964495  650937 addons.go:69] Setting default-storageclass=true in profile "embed-certs-683681"
	I1025 10:22:32.964521  650937 host.go:66] Checking if "embed-certs-683681" exists ...
	I1025 10:22:32.964522  650937 host.go:66] Checking if "embed-certs-683681" exists ...
	I1025 10:22:32.964534  650937 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-683681"
	I1025 10:22:32.964553  650937 config.go:182] Loaded profile config "embed-certs-683681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:22:32.964888  650937 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:22:32.964914  650937 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:22:32.965022  650937 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:22:32.970498  650937 out.go:179] * Verifying Kubernetes components...
	I1025 10:22:32.972008  650937 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:22:32.990938  650937 addons.go:238] Setting addon default-storageclass=true in "embed-certs-683681"
	W1025 10:22:32.990972  650937 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:22:32.991000  650937 host.go:66] Checking if "embed-certs-683681" exists ...
	I1025 10:22:32.991472  650937 cli_runner.go:164] Run: docker container inspect embed-certs-683681 --format={{.State.Status}}
	I1025 10:22:32.991497  650937 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:22:32.991505  650937 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:22:32.992867  650937 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:22:32.992890  650937 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:22:32.992898  650937 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 10:22:32.992950  650937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:22:32.994388  650937 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:22:32.994409  650937 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:22:32.994728  650937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:22:33.023208  650937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:22:33.030495  650937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:22:33.031038  650937 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:22:33.031059  650937 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:22:33.031123  650937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-683681
	I1025 10:22:33.058117  650937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/embed-certs-683681/id_rsa Username:docker}
	I1025 10:22:33.132725  650937 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:22:33.150038  650937 node_ready.go:35] waiting up to 6m0s for node "embed-certs-683681" to be "Ready" ...
	I1025 10:22:33.155049  650937 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:22:33.155076  650937 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:22:33.155978  650937 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:22:33.171698  650937 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:22:33.171733  650937 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:22:33.175020  650937 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:22:33.188568  650937 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:22:33.188599  650937 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:22:33.203598  650937 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:22:33.203625  650937 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:22:33.221077  650937 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:22:33.221104  650937 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:22:33.237697  650937 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:22:33.237728  650937 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:22:33.254956  650937 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:22:33.254983  650937 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:22:33.270158  650937 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:22:33.270186  650937 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:22:33.285514  650937 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:22:33.285540  650937 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:22:33.300927  650937 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:22:34.616048  650937 node_ready.go:49] node "embed-certs-683681" is "Ready"
	I1025 10:22:34.616087  650937 node_ready.go:38] duration metric: took 1.466004388s for node "embed-certs-683681" to be "Ready" ...
	I1025 10:22:34.616105  650937 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:22:34.616160  650937 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:22:35.164539  650937 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.008522975s)
	I1025 10:22:35.164613  650937 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.989556492s)
	I1025 10:22:35.164725  650937 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.863750398s)
	I1025 10:22:35.164740  650937 api_server.go:72] duration metric: took 2.200509022s to wait for apiserver process to appear ...
	I1025 10:22:35.164752  650937 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:22:35.164783  650937 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 10:22:35.166463  650937 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-683681 addons enable metrics-server
	
	I1025 10:22:35.172411  650937 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:22:35.172439  650937 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:22:35.180769  650937 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1025 10:22:35.182025  650937 addons.go:514] duration metric: took 2.217723351s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1025 10:22:35.665533  650937 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 10:22:35.671086  650937 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:22:35.671117  650937 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:22:36.165691  650937 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 10:22:36.170289  650937 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1025 10:22:36.171425  650937 api_server.go:141] control plane version: v1.34.1
	I1025 10:22:36.171457  650937 api_server.go:131] duration metric: took 1.006692122s to wait for apiserver health ...
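The healthz sequence above is the expected startup shape: /healthz returns 500 while the rbac/bootstrap-roles (and initially scheduling/bootstrap-system-priority-classes) post-start hooks are still running, then flips to 200 once bootstrapping finishes. A minimal polling sketch against the endpoint from the log; it skips TLS verification for brevity, whereas a real checker would trust the cluster CA:

    // Poll the apiserver's /healthz until it returns 200, treating
    // transient 500s during post-start-hook bootstrap as retryable.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Sketch shortcut: do not verify the apiserver certificate.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for {
            resp, err := client.Get("https://192.168.94.2:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                fmt.Println("healthz status:", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }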
	I1025 10:22:36.171467  650937 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:22:36.175737  650937 system_pods.go:59] 8 kube-system pods found
	I1025 10:22:36.175775  650937 system_pods.go:61] "coredns-66bc5c9577-545dp" [a2709fe3-a1d1-4394-8cf7-3776dc8fd318] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:22:36.175783  650937 system_pods.go:61] "etcd-embed-certs-683681" [efd93203-1fbf-495a-8d60-73421d0a6d9c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:22:36.175794  650937 system_pods.go:61] "kindnet-5zktx" [3398616a-6eb4-432e-bb84-ae1f166c7e71] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 10:22:36.175801  650937 system_pods.go:61] "kube-apiserver-embed-certs-683681" [0ff30802-f9d1-465d-8d94-769528b99497] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:22:36.175807  650937 system_pods.go:61] "kube-controller-manager-embed-certs-683681" [b794e426-ac8e-48d6-b9e3-38998ed4f272] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:22:36.175813  650937 system_pods.go:61] "kube-proxy-dbks6" [551b9ca3-e53d-4be0-bcb5-b96d76be6c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 10:22:36.175823  650937 system_pods.go:61] "kube-scheduler-embed-certs-683681" [97ed6526-e7b5-4086-a584-7e0cb301fb30] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:22:36.175830  650937 system_pods.go:61] "storage-provisioner" [42d81686-dd78-4ed1-9ead-cbcdca1d14ce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:22:36.175838  650937 system_pods.go:74] duration metric: took 4.363944ms to wait for pod list to return data ...
	I1025 10:22:36.175851  650937 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:22:36.178938  650937 default_sa.go:45] found service account: "default"
	I1025 10:22:36.178969  650937 default_sa.go:55] duration metric: took 3.109602ms for default service account to be created ...
	I1025 10:22:36.178983  650937 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:22:36.182267  650937 system_pods.go:86] 8 kube-system pods found
	I1025 10:22:36.182308  650937 system_pods.go:89] "coredns-66bc5c9577-545dp" [a2709fe3-a1d1-4394-8cf7-3776dc8fd318] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:22:36.182335  650937 system_pods.go:89] "etcd-embed-certs-683681" [efd93203-1fbf-495a-8d60-73421d0a6d9c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:22:36.182346  650937 system_pods.go:89] "kindnet-5zktx" [3398616a-6eb4-432e-bb84-ae1f166c7e71] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 10:22:36.182357  650937 system_pods.go:89] "kube-apiserver-embed-certs-683681" [0ff30802-f9d1-465d-8d94-769528b99497] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:22:36.182365  650937 system_pods.go:89] "kube-controller-manager-embed-certs-683681" [b794e426-ac8e-48d6-b9e3-38998ed4f272] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:22:36.182373  650937 system_pods.go:89] "kube-proxy-dbks6" [551b9ca3-e53d-4be0-bcb5-b96d76be6c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 10:22:36.182378  650937 system_pods.go:89] "kube-scheduler-embed-certs-683681" [97ed6526-e7b5-4086-a584-7e0cb301fb30] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:22:36.182383  650937 system_pods.go:89] "storage-provisioner" [42d81686-dd78-4ed1-9ead-cbcdca1d14ce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:22:36.182390  650937 system_pods.go:126] duration metric: took 3.401116ms to wait for k8s-apps to be running ...
	I1025 10:22:36.182401  650937 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:22:36.182446  650937 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:22:36.196787  650937 system_svc.go:56] duration metric: took 14.374597ms WaitForService to wait for kubelet
	I1025 10:22:36.196824  650937 kubeadm.go:586] duration metric: took 3.232594248s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:22:36.196856  650937 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:22:36.200108  650937 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 10:22:36.200140  650937 node_conditions.go:123] node cpu capacity is 8
	I1025 10:22:36.200158  650937 node_conditions.go:105] duration metric: took 3.297241ms to run NodePressure ...
	I1025 10:22:36.200171  650937 start.go:241] waiting for startup goroutines ...
	I1025 10:22:36.200177  650937 start.go:246] waiting for cluster config update ...
	I1025 10:22:36.200187  650937 start.go:255] writing updated cluster config ...
	I1025 10:22:36.200488  650937 ssh_runner.go:195] Run: rm -f paused
	I1025 10:22:36.204706  650937 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:22:36.208346  650937 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-545dp" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:22:38.216664  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:22:40.715388  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:22:43.215045  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:22:45.714426  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:22:47.714598  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:22:50.213835  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:22:52.214679  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:22:54.714775  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:22:57.214133  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:22:59.214411  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:23:01.214712  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:23:03.713972  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:23:05.714426  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:23:08.214136  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	W1025 10:23:10.216904  650937 pod_ready.go:104] pod "coredns-66bc5c9577-545dp" is not "Ready", error: <nil>
	I1025 10:23:10.714127  650937 pod_ready.go:94] pod "coredns-66bc5c9577-545dp" is "Ready"
	I1025 10:23:10.714153  650937 pod_ready.go:86] duration metric: took 34.505786729s for pod "coredns-66bc5c9577-545dp" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:23:10.717139  650937 pod_ready.go:83] waiting for pod "etcd-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:23:10.723930  650937 pod_ready.go:94] pod "etcd-embed-certs-683681" is "Ready"
	I1025 10:23:10.723954  650937 pod_ready.go:86] duration metric: took 6.78996ms for pod "etcd-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:23:10.726041  650937 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:23:10.729916  650937 pod_ready.go:94] pod "kube-apiserver-embed-certs-683681" is "Ready"
	I1025 10:23:10.729938  650937 pod_ready.go:86] duration metric: took 3.876121ms for pod "kube-apiserver-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:23:10.731795  650937 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:23:10.912596  650937 pod_ready.go:94] pod "kube-controller-manager-embed-certs-683681" is "Ready"
	I1025 10:23:10.912657  650937 pod_ready.go:86] duration metric: took 180.841663ms for pod "kube-controller-manager-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:23:11.112089  650937 pod_ready.go:83] waiting for pod "kube-proxy-dbks6" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:23:11.512096  650937 pod_ready.go:94] pod "kube-proxy-dbks6" is "Ready"
	I1025 10:23:11.512124  650937 pod_ready.go:86] duration metric: took 400.009257ms for pod "kube-proxy-dbks6" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:23:11.712447  650937 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:23:12.112428  650937 pod_ready.go:94] pod "kube-scheduler-embed-certs-683681" is "Ready"
	I1025 10:23:12.112457  650937 pod_ready.go:86] duration metric: took 399.97805ms for pod "kube-scheduler-embed-certs-683681" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:23:12.112470  650937 pod_ready.go:40] duration metric: took 35.907729209s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
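The pod_ready lines above poll each listed kube-system pod until its PodReady condition reports True (or the pod disappears). A sketch of that per-pod check with k8s.io/client-go, using the default kubeconfig path and the coredns pod name from the log:

    // Report whether a pod's PodReady condition is True -- the test
    // behind each pod_ready.go line above. Sketch, not minikube code.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(
            context.Background(), "coredns-66bc5c9577-545dp", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("ready:", podReady(pod))
    }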
	I1025 10:23:12.158819  650937 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 10:23:12.161040  650937 out.go:179] * Done! kubectl is now configured to use "embed-certs-683681" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.158850674Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.158883955Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.158912193Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.163160984Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.163202354Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.163247114Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.167484693Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.167524941Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.167552251Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.171954727Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.171989617Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.172014328Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.176344528Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:22:46 embed-certs-683681 crio[567]: time="2025-10-25T10:22:46.176385829Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:22:56 embed-certs-683681 crio[567]: time="2025-10-25T10:22:56.299808509Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=00b12e10-779a-4f5c-b0fb-b0e7916c300b name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:22:56 embed-certs-683681 crio[567]: time="2025-10-25T10:22:56.302588818Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cf888b1d-d103-4f64-a006-fa8ea8cb019e name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:22:56 embed-certs-683681 crio[567]: time="2025-10-25T10:22:56.305687984Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7tq6h/dashboard-metrics-scraper" id=5152b775-1558-4625-87b5-1b1a82abf12b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:22:56 embed-certs-683681 crio[567]: time="2025-10-25T10:22:56.305867485Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:22:56 embed-certs-683681 crio[567]: time="2025-10-25T10:22:56.31411453Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:22:56 embed-certs-683681 crio[567]: time="2025-10-25T10:22:56.314699599Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:22:56 embed-certs-683681 crio[567]: time="2025-10-25T10:22:56.347463927Z" level=info msg="Created container 488d8e2589cf8b78062821187d7cc8a70dc6b22b21a81dd249dbd3f24f1fdf7f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7tq6h/dashboard-metrics-scraper" id=5152b775-1558-4625-87b5-1b1a82abf12b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:22:56 embed-certs-683681 crio[567]: time="2025-10-25T10:22:56.348255533Z" level=info msg="Starting container: 488d8e2589cf8b78062821187d7cc8a70dc6b22b21a81dd249dbd3f24f1fdf7f" id=16ef52ce-670e-49ce-9c70-277b4b3f279b name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:22:56 embed-certs-683681 crio[567]: time="2025-10-25T10:22:56.350032155Z" level=info msg="Started container" PID=1776 containerID=488d8e2589cf8b78062821187d7cc8a70dc6b22b21a81dd249dbd3f24f1fdf7f description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7tq6h/dashboard-metrics-scraper id=16ef52ce-670e-49ce-9c70-277b4b3f279b name=/runtime.v1.RuntimeService/StartContainer sandboxID=7f66bd3e62298af75dd6cfbc6be82dd0a5f4120e24bbabde39fbf1599c7f0692
	Oct 25 10:22:56 embed-certs-683681 crio[567]: time="2025-10-25T10:22:56.392865537Z" level=info msg="Removing container: 75c01b005dd97b961c9786f77b020835ade291436872b098b1c7554c1a8f92b4" id=317144e3-3fc7-40f8-ba02-539fdaad3eaa name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:22:56 embed-certs-683681 crio[567]: time="2025-10-25T10:22:56.40316973Z" level=info msg="Removed container 75c01b005dd97b961c9786f77b020835ade291436872b098b1c7554c1a8f92b4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7tq6h/dashboard-metrics-scraper" id=317144e3-3fc7-40f8-ba02-539fdaad3eaa name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	488d8e2589cf8       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           32 seconds ago      Exited              dashboard-metrics-scraper   2                   7f66bd3e62298       dashboard-metrics-scraper-6ffb444bf9-7tq6h   kubernetes-dashboard
	ae91c0eace8a7       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   d0c61bfe88f17       kubernetes-dashboard-855c9754f9-b2cmv        kubernetes-dashboard
	8cadb44f0328e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Running             storage-provisioner         1                   1244f314e60cb       storage-provisioner                          kube-system
	ca07c7ae252ac       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   917fd9778d098       busybox                                      default
	9508b4a27687e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   4e065699a4418       coredns-66bc5c9577-545dp                     kube-system
	5a596c77f5df5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   1244f314e60cb       storage-provisioner                          kube-system
	f7c43259b62da       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  0                   e9833649e183c       kube-proxy-dbks6                             kube-system
	0e43c9fb1569e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   1c2faaf27e736       kindnet-5zktx                                kube-system
	e23b1b78e5c41       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   15879371c0d8f       etcd-embed-certs-683681                      kube-system
	dc575cdd84b4a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   034b10356e3c2       kube-scheduler-embed-certs-683681            kube-system
	34d10690becbf       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           56 seconds ago      Running             kube-apiserver              0                   ebe2fa40f64a7       kube-apiserver-embed-certs-683681            kube-system
	a672b9f6352db       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           56 seconds ago      Running             kube-controller-manager     0                   9b260be608e9f       kube-controller-manager-embed-certs-683681   kube-system
	
	
	==> coredns [9508b4a27687ec159979bd17e4bb05c52528b9b9205a5dd4c224cf45bbbdf857] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50425 - 36668 "HINFO IN 7802470207301936582.893117442621612942. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.085493074s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-683681
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-683681
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=embed-certs-683681
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_21_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:21:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-683681
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:23:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:23:05 +0000   Sat, 25 Oct 2025 10:21:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:23:05 +0000   Sat, 25 Oct 2025 10:21:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:23:05 +0000   Sat, 25 Oct 2025 10:21:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:23:05 +0000   Sat, 25 Oct 2025 10:21:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-683681
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                b190e06d-a88f-488c-8710-85f0327cbd4d
	  Boot ID:                    41de8bfa-0cc1-441f-80d4-c56fb8371229
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-545dp                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-embed-certs-683681                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-5zktx                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-embed-certs-683681             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-embed-certs-683681    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-dbks6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-embed-certs-683681             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-7tq6h    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-b2cmv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 52s                  kube-proxy       
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s (x8 over 118s)  kubelet          Node embed-certs-683681 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x8 over 118s)  kubelet          Node embed-certs-683681 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x8 over 118s)  kubelet          Node embed-certs-683681 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    112s                 kubelet          Node embed-certs-683681 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s                 kubelet          Node embed-certs-683681 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  112s                 kubelet          Node embed-certs-683681 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           108s                 node-controller  Node embed-certs-683681 event: Registered Node embed-certs-683681 in Controller
	  Normal  NodeReady                96s                  kubelet          Node embed-certs-683681 status is now: NodeReady
	  Normal  Starting                 56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)    kubelet          Node embed-certs-683681 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)    kubelet          Node embed-certs-683681 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)    kubelet          Node embed-certs-683681 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                  node-controller  Node embed-certs-683681 event: Registered Node embed-certs-683681 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 22 1c 34 60 90 41 82 33 fb 75 87 fa 08 00
	[Oct25 10:18] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 3d 4d bf 49 5d 08 06
	[  +0.000365] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fe 72 b8 ab d2 81 08 06
	[ +29.291338] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 96 23 11 37 e3 00 08 06
	[  +0.000335] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[ +21.527050] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 89 98 95 1f c3 08 06
	[  +0.000689] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 0a 58 87 dc cf 08 06
	[Oct25 10:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[  +9.472150] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
	[  +6.585715] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ce 90 e9 36 a0 95 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 5c 20 b3 f5 c2 08 06
	[ +15.111475] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 5e 04 d2 54 0d 08 06
	[  +0.000467] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 1e 74 96 31 3c 08 06
	
	
	==> etcd [e23b1b78e5c41f9e1aede2d3b6ae6248ab011db8c6c4eb8d454bf9fb3d83c20d] <==
	{"level":"warn","ts":"2025-10-25T10:22:33.989496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.005357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.011855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.018355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.026548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.032941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.040450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.046902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.055769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.064503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.070653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.077497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.084410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.090765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.097274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.103856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.110241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.117657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.124067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.130874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.137794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.158412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.165738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.172313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:22:34.223977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59584","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:23:28 up  2:05,  0 user,  load average: 1.64, 3.95, 5.46
	Linux embed-certs-683681 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0e43c9fb1569ef6e07a5677d2a15b6334bc4fe7db76411edffd13663fe4716c1] <==
	I1025 10:22:35.854528       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:22:35.854801       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1025 10:22:35.854973       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:22:35.854989       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:22:35.855006       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:22:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:22:36.153125       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:22:36.153155       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:22:36.153168       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:22:36.153411       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 10:22:36.553661       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:22:36.553690       1 metrics.go:72] Registering metrics
	I1025 10:22:36.553770       1 controller.go:711] "Syncing nftables rules"
	I1025 10:22:46.153255       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 10:22:46.153366       1 main.go:301] handling current node
	I1025 10:22:56.153277       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 10:22:56.153314       1 main.go:301] handling current node
	I1025 10:23:06.153300       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 10:23:06.153373       1 main.go:301] handling current node
	I1025 10:23:16.152933       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 10:23:16.152971       1 main.go:301] handling current node
	I1025 10:23:26.154440       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 10:23:26.154482       1 main.go:301] handling current node
	
	
	==> kube-apiserver [34d10690becbf8807247e176ac1d8a485247e95e7e43b59248e6b35de5993f58] <==
	I1025 10:22:34.687136       1 aggregator.go:171] initial CRD sync complete...
	I1025 10:22:34.687150       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 10:22:34.687158       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:22:34.687165       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:22:34.687156       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 10:22:34.687419       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 10:22:34.687652       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:22:34.687739       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 10:22:34.690831       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 10:22:34.691260       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1025 10:22:34.695784       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 10:22:34.711072       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 10:22:34.713658       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:22:34.725501       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:22:34.966153       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:22:34.994753       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:22:35.017705       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:22:35.025672       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:22:35.032376       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:22:35.068974       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.100.143"}
	I1025 10:22:35.078709       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.140.29"}
	I1025 10:22:35.591621       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:22:38.441204       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:22:38.491563       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:22:38.592254       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a672b9f6352dbc575a968854b42894ae89478ba62caf0dddb38381973fba07e4] <==
	I1025 10:22:37.998469       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 10:22:37.999661       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 10:22:37.999770       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 10:22:38.002030       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 10:22:38.004402       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 10:22:38.007642       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 10:22:38.009958       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 10:22:38.012289       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:22:38.014657       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 10:22:38.016919       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:22:38.018588       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 10:22:38.021444       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 10:22:38.037777       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:22:38.037833       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 10:22:38.039000       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 10:22:38.039034       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 10:22:38.039185       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 10:22:38.039289       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 10:22:38.039476       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-683681"
	I1025 10:22:38.039541       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 10:22:38.039675       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 10:22:38.039927       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 10:22:38.044450       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 10:22:38.044450       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:22:38.064755       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f7c43259b62da489acda62b9d2e1e2867140658c7c81ddd6b20c46ec720bb6b6] <==
	I1025 10:22:35.712973       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:22:35.803782       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:22:35.904920       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:22:35.905026       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1025 10:22:35.905148       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:22:35.923981       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:22:35.924059       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:22:35.929391       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:22:35.929751       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:22:35.929787       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:22:35.932591       1 config.go:200] "Starting service config controller"
	I1025 10:22:35.932614       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:22:35.932597       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:22:35.932656       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:22:35.932670       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:22:35.932681       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:22:35.932834       1 config.go:309] "Starting node config controller"
	I1025 10:22:35.932888       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:22:35.932924       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:22:36.032823       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:22:36.032883       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 10:22:36.032884       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [dc575cdd84b4a101c9861bb4bbb3fd1c6b9365f0ddd8cf06b22b3b39ff95c2c6] <==
	I1025 10:22:33.610091       1 serving.go:386] Generated self-signed cert in-memory
	W1025 10:22:34.608781       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 10:22:34.608830       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 10:22:34.608843       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 10:22:34.608853       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 10:22:34.649983       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:22:34.650015       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:22:34.653640       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:22:34.653702       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:22:34.655536       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:22:34.656040       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 10:22:34.754779       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:22:38 embed-certs-683681 kubelet[724]: I1025 10:22:38.746507     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ea38f57e-a5bf-47fc-b9c0-d287bd1036f4-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-7tq6h\" (UID: \"ea38f57e-a5bf-47fc-b9c0-d287bd1036f4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7tq6h"
	Oct 25 10:22:38 embed-certs-683681 kubelet[724]: I1025 10:22:38.746620     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2bfd\" (UniqueName: \"kubernetes.io/projected/ea38f57e-a5bf-47fc-b9c0-d287bd1036f4-kube-api-access-t2bfd\") pod \"dashboard-metrics-scraper-6ffb444bf9-7tq6h\" (UID: \"ea38f57e-a5bf-47fc-b9c0-d287bd1036f4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7tq6h"
	Oct 25 10:22:38 embed-certs-683681 kubelet[724]: I1025 10:22:38.746686     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lblh9\" (UniqueName: \"kubernetes.io/projected/104da91e-df0f-49a9-bf95-7fd18378292d-kube-api-access-lblh9\") pod \"kubernetes-dashboard-855c9754f9-b2cmv\" (UID: \"104da91e-df0f-49a9-bf95-7fd18378292d\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b2cmv"
	Oct 25 10:22:40 embed-certs-683681 kubelet[724]: I1025 10:22:40.644554     724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 25 10:22:41 embed-certs-683681 kubelet[724]: I1025 10:22:41.344410     724 scope.go:117] "RemoveContainer" containerID="5f26e9553f776d73cefd45b0376452fecb859bf352bdb0b5a86e0f15ee46f871"
	Oct 25 10:22:42 embed-certs-683681 kubelet[724]: I1025 10:22:42.349164     724 scope.go:117] "RemoveContainer" containerID="5f26e9553f776d73cefd45b0376452fecb859bf352bdb0b5a86e0f15ee46f871"
	Oct 25 10:22:42 embed-certs-683681 kubelet[724]: I1025 10:22:42.349302     724 scope.go:117] "RemoveContainer" containerID="75c01b005dd97b961c9786f77b020835ade291436872b098b1c7554c1a8f92b4"
	Oct 25 10:22:42 embed-certs-683681 kubelet[724]: E1025 10:22:42.349562     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7tq6h_kubernetes-dashboard(ea38f57e-a5bf-47fc-b9c0-d287bd1036f4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7tq6h" podUID="ea38f57e-a5bf-47fc-b9c0-d287bd1036f4"
	Oct 25 10:22:43 embed-certs-683681 kubelet[724]: I1025 10:22:43.353932     724 scope.go:117] "RemoveContainer" containerID="75c01b005dd97b961c9786f77b020835ade291436872b098b1c7554c1a8f92b4"
	Oct 25 10:22:43 embed-certs-683681 kubelet[724]: E1025 10:22:43.354133     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7tq6h_kubernetes-dashboard(ea38f57e-a5bf-47fc-b9c0-d287bd1036f4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7tq6h" podUID="ea38f57e-a5bf-47fc-b9c0-d287bd1036f4"
	Oct 25 10:22:44 embed-certs-683681 kubelet[724]: I1025 10:22:44.356879     724 scope.go:117] "RemoveContainer" containerID="75c01b005dd97b961c9786f77b020835ade291436872b098b1c7554c1a8f92b4"
	Oct 25 10:22:44 embed-certs-683681 kubelet[724]: E1025 10:22:44.357134     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7tq6h_kubernetes-dashboard(ea38f57e-a5bf-47fc-b9c0-d287bd1036f4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7tq6h" podUID="ea38f57e-a5bf-47fc-b9c0-d287bd1036f4"
	Oct 25 10:22:45 embed-certs-683681 kubelet[724]: I1025 10:22:45.372539     724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b2cmv" podStartSLOduration=1.686433971 podStartE2EDuration="7.372513668s" podCreationTimestamp="2025-10-25 10:22:38 +0000 UTC" firstStartedPulling="2025-10-25 10:22:38.988772835 +0000 UTC m=+6.784459906" lastFinishedPulling="2025-10-25 10:22:44.674852532 +0000 UTC m=+12.470539603" observedRunningTime="2025-10-25 10:22:45.372092175 +0000 UTC m=+13.167779289" watchObservedRunningTime="2025-10-25 10:22:45.372513668 +0000 UTC m=+13.168200761"
	Oct 25 10:22:56 embed-certs-683681 kubelet[724]: I1025 10:22:56.299295     724 scope.go:117] "RemoveContainer" containerID="75c01b005dd97b961c9786f77b020835ade291436872b098b1c7554c1a8f92b4"
	Oct 25 10:22:56 embed-certs-683681 kubelet[724]: I1025 10:22:56.391518     724 scope.go:117] "RemoveContainer" containerID="75c01b005dd97b961c9786f77b020835ade291436872b098b1c7554c1a8f92b4"
	Oct 25 10:22:56 embed-certs-683681 kubelet[724]: I1025 10:22:56.391779     724 scope.go:117] "RemoveContainer" containerID="488d8e2589cf8b78062821187d7cc8a70dc6b22b21a81dd249dbd3f24f1fdf7f"
	Oct 25 10:22:56 embed-certs-683681 kubelet[724]: E1025 10:22:56.392055     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7tq6h_kubernetes-dashboard(ea38f57e-a5bf-47fc-b9c0-d287bd1036f4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7tq6h" podUID="ea38f57e-a5bf-47fc-b9c0-d287bd1036f4"
	Oct 25 10:23:02 embed-certs-683681 kubelet[724]: I1025 10:23:02.490379     724 scope.go:117] "RemoveContainer" containerID="488d8e2589cf8b78062821187d7cc8a70dc6b22b21a81dd249dbd3f24f1fdf7f"
	Oct 25 10:23:02 embed-certs-683681 kubelet[724]: E1025 10:23:02.490639     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7tq6h_kubernetes-dashboard(ea38f57e-a5bf-47fc-b9c0-d287bd1036f4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7tq6h" podUID="ea38f57e-a5bf-47fc-b9c0-d287bd1036f4"
	Oct 25 10:23:15 embed-certs-683681 kubelet[724]: I1025 10:23:15.296968     724 scope.go:117] "RemoveContainer" containerID="488d8e2589cf8b78062821187d7cc8a70dc6b22b21a81dd249dbd3f24f1fdf7f"
	Oct 25 10:23:15 embed-certs-683681 kubelet[724]: E1025 10:23:15.297178     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7tq6h_kubernetes-dashboard(ea38f57e-a5bf-47fc-b9c0-d287bd1036f4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7tq6h" podUID="ea38f57e-a5bf-47fc-b9c0-d287bd1036f4"
	Oct 25 10:23:24 embed-certs-683681 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:23:24 embed-certs-683681 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:23:24 embed-certs-683681 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 10:23:24 embed-certs-683681 systemd[1]: kubelet.service: Consumed 1.827s CPU time.
	
	
	==> kubernetes-dashboard [ae91c0eace8a71b1845d97507f08b3cce89463dc558fab2ed073d1b251d048a2] <==
	2025/10/25 10:22:44 Using namespace: kubernetes-dashboard
	2025/10/25 10:22:44 Using in-cluster config to connect to apiserver
	2025/10/25 10:22:44 Using secret token for csrf signing
	2025/10/25 10:22:44 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 10:22:44 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 10:22:44 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 10:22:44 Generating JWE encryption key
	2025/10/25 10:22:44 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 10:22:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 10:22:44 Initializing JWE encryption key from synchronized object
	2025/10/25 10:22:44 Creating in-cluster Sidecar client
	2025/10/25 10:22:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:22:44 Serving insecurely on HTTP port: 9090
	2025/10/25 10:23:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:22:44 Starting overwatch
	
	
	==> storage-provisioner [5a596c77f5df556e869709b8cf5dcb9c78dc06441ded8c2f7831e35736644375] <==
	I1025 10:22:35.674811       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 10:22:35.678932       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [8cadb44f0328e5bdc8a75a15aa015760ee35f78df670f55e688fcc7b1659aeef] <==
	W1025 10:23:03.837778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:05.841245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:05.845469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:07.849795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:07.854436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:09.858341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:09.864218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:11.867927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:11.872860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:13.876565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:13.882465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:15.885810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:15.892041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:17.896543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:17.901182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:19.905299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:19.913671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:21.917360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:21.921866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:23.925519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:23.930269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:25.934510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:25.939738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:27.943699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:23:27.948087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
-- /stdout --
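
The storage-provisioner log above is dominated by client-go warnings that v1 Endpoints is deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. As a minimal sketch of the replacement API, assuming in-cluster credentials and a standard client-go clientset (illustrative only, not the provisioner's actual code):

// Sketch: list discovery.k8s.io/v1 EndpointSlices instead of the
// deprecated v1 Endpoints. Assumes in-cluster credentials, as the
// storage-provisioner would have; error handling kept minimal.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// EndpointSlices belonging to a Service carry the
	// kubernetes.io/service-name label.
	slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(
		context.Background(),
		metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=kube-dns"},
	)
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
	}
}

For ad-hoc inspection, the kubectl equivalent would be "kubectl get endpointslices -n kube-system".
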
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-683681 -n embed-certs-683681
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-683681 -n embed-certs-683681: exit status 2 (348.791368ms)

-- stdout --
	Running

-- /stdout --
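
The --format={{.APIServer}} argument above is a Go text/template evaluated against the profile's status, which is why the command prints just "Running". A minimal sketch of the template mechanics, using a hypothetical Status struct as a stand-in rather than minikube's actual internal type:

// Sketch of how a --format Go template renders a single status field.
// Status here is a hypothetical stand-in, not minikube's real struct.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host, Kubelet, APIServer string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
	// Prints "Running", mirroring the stdout captured above.
	if err := tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Running", APIServer: "Running"}); err != nil {
		panic(err)
	}
}
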
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-683681 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (5.84s)
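
The post-mortem's final check above shells out to kubectl with --field-selector=status.phase!=Running to surface any non-running pods. The same query expressed directly with client-go would look roughly like the sketch below; it assumes a kubeconfig at the default path with the desired context active, and is not the harness's actual code:

// Sketch: list pods whose status.phase is not Running, mirroring the
// kubectl --field-selector query the post-mortem runs. Assumes a
// kubeconfig at $HOME/.kube/config with the desired context active.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// An empty namespace ("") lists across all namespaces, like kubectl -A.
	pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}

Since status.phase is one of the pod fields the API server supports in field selectors, the filter is evaluated server-side, matching kubectl's behavior.
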

                                                
                                    

Test pass (262/326)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.44
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.25
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.34.1/json-events 4.13
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.24
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.16
20 TestDownloadOnlyKic 0.45
21 TestBinaryMirror 0.89
22 TestOffline 60.86
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 150.85
31 TestAddons/serial/GCPAuth/Namespaces 0.17
32 TestAddons/serial/GCPAuth/FakeCredentials 8.44
48 TestAddons/StoppedEnableDisable 18.62
49 TestCertOptions 24.28
50 TestCertExpiration 216.5
52 TestForceSystemdFlag 24.91
53 TestForceSystemdEnv 31.22
58 TestErrorSpam/setup 20.27
59 TestErrorSpam/start 0.73
60 TestErrorSpam/status 1.01
61 TestErrorSpam/pause 7.2
62 TestErrorSpam/unpause 5.85
63 TestErrorSpam/stop 12.59
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 38.48
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.74
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.15
75 TestFunctional/serial/CacheCmd/cache/add_local 1.75
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.73
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 76.95
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.34
86 TestFunctional/serial/LogsFileCmd 1.35
87 TestFunctional/serial/InvalidService 4.67
89 TestFunctional/parallel/ConfigCmd 0.52
90 TestFunctional/parallel/DashboardCmd 6.67
91 TestFunctional/parallel/DryRun 0.52
92 TestFunctional/parallel/InternationalLanguage 0.21
93 TestFunctional/parallel/StatusCmd 1.25
98 TestFunctional/parallel/AddonsCmd 0.19
99 TestFunctional/parallel/PersistentVolumeClaim 26.4
101 TestFunctional/parallel/SSHCmd 0.68
102 TestFunctional/parallel/CpCmd 2.03
103 TestFunctional/parallel/MySQL 15.85
104 TestFunctional/parallel/FileSync 0.29
105 TestFunctional/parallel/CertSync 1.81
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
113 TestFunctional/parallel/License 0.47
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.49
116 TestFunctional/parallel/ProfileCmd/profile_list 0.48
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.53
119 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.44
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.19
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
129 TestFunctional/parallel/MountCmd/any-port 7.03
130 TestFunctional/parallel/MountCmd/specific-port 1.68
131 TestFunctional/parallel/MountCmd/VerifyCleanup 1.95
132 TestFunctional/parallel/Version/short 0.12
133 TestFunctional/parallel/Version/components 0.67
135 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
136 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
137 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
138 TestFunctional/parallel/ImageCommands/ImageBuild 3.95
139 TestFunctional/parallel/ImageCommands/Setup 1.52
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
150 TestFunctional/parallel/ServiceCmd/List 1.72
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.72
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 119.18
163 TestMultiControlPlane/serial/DeployApp 5.73
164 TestMultiControlPlane/serial/PingHostFromPods 1.13
165 TestMultiControlPlane/serial/AddWorkerNode 24.7
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.96
168 TestMultiControlPlane/serial/CopyFile 18.36
169 TestMultiControlPlane/serial/StopSecondaryNode 18.93
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.77
171 TestMultiControlPlane/serial/RestartSecondaryNode 14.67
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.96
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 120.93
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.75
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.75
176 TestMultiControlPlane/serial/StopCluster 41.68
177 TestMultiControlPlane/serial/RestartCluster 53.98
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.77
179 TestMultiControlPlane/serial/AddSecondaryNode 36.98
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.95
184 TestJSONOutput/start/Command 40.05
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 6.66
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.25
209 TestKicCustomNetwork/create_custom_network 27.92
210 TestKicCustomNetwork/use_default_bridge_network 24.25
211 TestKicExistingNetwork 26.2
212 TestKicCustomSubnet 28.98
213 TestKicStaticIP 26.55
214 TestMainNoArgs 0.07
215 TestMinikubeProfile 52.38
218 TestMountStart/serial/StartWithMountFirst 5.89
219 TestMountStart/serial/VerifyMountFirst 0.29
220 TestMountStart/serial/StartWithMountSecond 5.82
221 TestMountStart/serial/VerifyMountSecond 0.29
222 TestMountStart/serial/DeleteFirst 1.75
223 TestMountStart/serial/VerifyMountPostDelete 0.29
224 TestMountStart/serial/Stop 1.28
225 TestMountStart/serial/RestartStopped 7.68
226 TestMountStart/serial/VerifyMountPostStop 0.29
229 TestMultiNode/serial/FreshStart2Nodes 62.61
230 TestMultiNode/serial/DeployApp2Nodes 4.59
231 TestMultiNode/serial/PingHostFrom2Pods 0.77
232 TestMultiNode/serial/AddNode 23.12
233 TestMultiNode/serial/MultiNodeLabels 0.07
234 TestMultiNode/serial/ProfileList 0.7
235 TestMultiNode/serial/CopyFile 10.55
236 TestMultiNode/serial/StopNode 2.37
237 TestMultiNode/serial/StartAfterStop 7.88
238 TestMultiNode/serial/RestartKeepsNodes 82.31
239 TestMultiNode/serial/DeleteNode 5.39
240 TestMultiNode/serial/StopMultiNode 28.81
241 TestMultiNode/serial/RestartMultiNode 47.2
242 TestMultiNode/serial/ValidateNameConflict 24.41
247 TestPreload 111.56
249 TestScheduledStopUnix 102.05
252 TestInsufficientStorage 10.12
253 TestRunningBinaryUpgrade 49.76
255 TestKubernetesUpgrade 309.16
256 TestMissingContainerUpgrade 100.72
258 TestPause/serial/Start 56.6
259 TestStoppedBinaryUpgrade/Setup 0.54
260 TestStoppedBinaryUpgrade/Upgrade 69.17
261 TestPause/serial/SecondStartNoReconfiguration 6.14
263 TestStoppedBinaryUpgrade/MinikubeLogs 1.11
265 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
266 TestNoKubernetes/serial/StartWithK8s 34.07
274 TestNetworkPlugins/group/false 5.96
275 TestNoKubernetes/serial/StartWithStopK8s 18.65
279 TestNoKubernetes/serial/Start 5.3
280 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
281 TestNoKubernetes/serial/ProfileList 19.48
282 TestNoKubernetes/serial/Stop 1.3
283 TestNoKubernetes/serial/StartNoArgs 6.87
284 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
292 TestNetworkPlugins/group/auto/Start 41.09
293 TestNetworkPlugins/group/kindnet/Start 38.09
294 TestNetworkPlugins/group/auto/KubeletFlags 0.31
295 TestNetworkPlugins/group/auto/NetCatPod 9.24
296 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
297 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
298 TestNetworkPlugins/group/kindnet/NetCatPod 8.21
299 TestNetworkPlugins/group/auto/DNS 0.13
300 TestNetworkPlugins/group/auto/Localhost 0.11
301 TestNetworkPlugins/group/auto/HairPin 0.12
302 TestNetworkPlugins/group/kindnet/DNS 0.12
303 TestNetworkPlugins/group/kindnet/Localhost 0.09
304 TestNetworkPlugins/group/kindnet/HairPin 0.11
305 TestNetworkPlugins/group/calico/Start 53.58
306 TestNetworkPlugins/group/custom-flannel/Start 48.18
307 TestNetworkPlugins/group/calico/ControllerPod 6.01
308 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
309 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.21
310 TestNetworkPlugins/group/calico/KubeletFlags 0.34
311 TestNetworkPlugins/group/calico/NetCatPod 9.21
312 TestNetworkPlugins/group/enable-default-cni/Start 41.74
313 TestNetworkPlugins/group/custom-flannel/DNS 0.14
314 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
315 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
316 TestNetworkPlugins/group/calico/DNS 0.12
317 TestNetworkPlugins/group/calico/Localhost 0.09
318 TestNetworkPlugins/group/calico/HairPin 0.1
319 TestNetworkPlugins/group/flannel/Start 54.33
320 TestNetworkPlugins/group/bridge/Start 39.29
321 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
322 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.29
324 TestStartStop/group/old-k8s-version/serial/FirstStart 53.86
325 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
326 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
327 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
328 TestNetworkPlugins/group/bridge/KubeletFlags 0.35
329 TestNetworkPlugins/group/bridge/NetCatPod 9.22
331 TestStartStop/group/no-preload/serial/FirstStart 58.53
332 TestNetworkPlugins/group/flannel/ControllerPod 6.01
333 TestNetworkPlugins/group/bridge/DNS 0.12
334 TestNetworkPlugins/group/bridge/Localhost 0.1
335 TestNetworkPlugins/group/bridge/HairPin 0.1
336 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
337 TestNetworkPlugins/group/flannel/NetCatPod 10.23
338 TestNetworkPlugins/group/flannel/DNS 0.14
339 TestNetworkPlugins/group/flannel/Localhost 0.13
340 TestNetworkPlugins/group/flannel/HairPin 0.11
342 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 46.19
343 TestStartStop/group/old-k8s-version/serial/DeployApp 11.33
345 TestStartStop/group/old-k8s-version/serial/Stop 17.48
347 TestStartStop/group/newest-cni/serial/FirstStart 28.14
348 TestStartStop/group/no-preload/serial/DeployApp 9.27
349 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.27
350 TestStartStop/group/old-k8s-version/serial/SecondStart 53.04
352 TestStartStop/group/no-preload/serial/Stop 16.66
353 TestStartStop/group/newest-cni/serial/DeployApp 0
355 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.28
356 TestStartStop/group/newest-cni/serial/Stop 8.12
358 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
359 TestStartStop/group/newest-cni/serial/SecondStart 13.13
360 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.5
361 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.28
362 TestStartStop/group/no-preload/serial/SecondStart 52.37
363 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
364 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
365 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
367 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
368 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 45.67
370 TestStartStop/group/embed-certs/serial/FirstStart 45.64
371 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
372 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.08
373 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
375 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
376 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
377 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
378 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
380 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
381 TestStartStop/group/embed-certs/serial/DeployApp 9.25
382 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
385 TestStartStop/group/embed-certs/serial/Stop 18.13
386 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
387 TestStartStop/group/embed-certs/serial/SecondStart 46.77
388 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
389 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
390 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
TestDownloadOnly/v1.28.0/json-events (5.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-278458 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-278458 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.444051349s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.44s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1025 09:32:11.399848  325455 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1025 09:32:11.399985  325455 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
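The preload-exists check above amounts to a stat of the cached tarball reported by preload.go:198. A minimal standalone sketch of that check follows; it is not minikube's actual helper, and the path layout is simply copied from the log line above:

// Sketch: confirm the v1.28.0/cri-o preload tarball is present in the
// local minikube cache. Hypothetical standalone program, not test code.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	home, err := os.UserHomeDir()
	if err != nil {
		fmt.Println("cannot resolve home dir:", err)
		return
	}
	tarball := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4")
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload missing:", err)
		return
	}
	fmt.Println("found local preload:", tarball)
}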

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-278458
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-278458: exit status 85 (83.174701ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-278458 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-278458 │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:32:06
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:32:06.011253  325466 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:32:06.011434  325466 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:32:06.011446  325466 out.go:374] Setting ErrFile to fd 2...
	I1025 09:32:06.011451  325466 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:32:06.011650  325466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	W1025 09:32:06.011808  325466 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21767-321838/.minikube/config/config.json: open /home/jenkins/minikube-integration/21767-321838/.minikube/config/config.json: no such file or directory
	I1025 09:32:06.012357  325466 out.go:368] Setting JSON to true
	I1025 09:32:06.013358  325466 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4475,"bootTime":1761380251,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:32:06.013469  325466 start.go:141] virtualization: kvm guest
	I1025 09:32:06.015722  325466 out.go:99] [download-only-278458] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1025 09:32:06.015888  325466 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball: no such file or directory
	I1025 09:32:06.015917  325466 notify.go:220] Checking for updates...
	I1025 09:32:06.017614  325466 out.go:171] MINIKUBE_LOCATION=21767
	I1025 09:32:06.019185  325466 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:32:06.020573  325466 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 09:32:06.022160  325466 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	I1025 09:32:06.023668  325466 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1025 09:32:06.026057  325466 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 09:32:06.026333  325466 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:32:06.050263  325466 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:32:06.050395  325466 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:32:06.116634  325466 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:47 SystemTime:2025-10-25 09:32:06.10511202 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:32:06.116756  325466 docker.go:318] overlay module found
	I1025 09:32:06.118652  325466 out.go:99] Using the docker driver based on user configuration
	I1025 09:32:06.118688  325466 start.go:305] selected driver: docker
	I1025 09:32:06.118697  325466 start.go:925] validating driver "docker" against <nil>
	I1025 09:32:06.118800  325466 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:32:06.182329  325466 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:47 SystemTime:2025-10-25 09:32:06.172103325 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:32:06.182544  325466 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:32:06.183041  325466 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1025 09:32:06.183845  325466 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 09:32:06.185408  325466 out.go:171] Using Docker driver with root privileges
	I1025 09:32:06.186514  325466 cni.go:84] Creating CNI manager for ""
	I1025 09:32:06.186582  325466 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:32:06.186599  325466 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:32:06.186724  325466 start.go:349] cluster config:
	{Name:download-only-278458 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-278458 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:32:06.188140  325466 out.go:99] Starting "download-only-278458" primary control-plane node in "download-only-278458" cluster
	I1025 09:32:06.188173  325466 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:32:06.189619  325466 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:32:06.189658  325466 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 09:32:06.189808  325466 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:32:06.207984  325466 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1025 09:32:06.208206  325466 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1025 09:32:06.208308  325466 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1025 09:32:06.214593  325466 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1025 09:32:06.214630  325466 cache.go:58] Caching tarball of preloaded images
	I1025 09:32:06.214788  325466 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 09:32:06.216669  325466 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1025 09:32:06.216702  325466 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1025 09:32:06.238598  325466 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1025 09:32:06.238725  325466 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1025 09:32:09.454896  325466 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1025 09:32:09.455271  325466 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/download-only-278458/config.json ...
	I1025 09:32:09.455312  325466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/download-only-278458/config.json: {Name:mk83a11bb89d594261c61e2e25a7b2828f196b60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:09.455532  325466 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 09:32:09.456431  325466 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21767-321838/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-278458 host does not exist
	  To start a cluster, run: "minikube start -p download-only-278458"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.25s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-278458
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (4.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-731105 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-731105 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.13236554s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.13s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1025 09:32:16.022628  325455 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1025 09:32:16.022683  325455 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-321838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-731105
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-731105: exit status 85 (78.050621ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-278458 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-278458 │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ delete  │ -p download-only-278458                                                                                                                                                   │ download-only-278458 │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ start   │ -o=json --download-only -p download-only-731105 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-731105 │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:32:11
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:32:11.945454  325814 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:32:11.945754  325814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:32:11.945767  325814 out.go:374] Setting ErrFile to fd 2...
	I1025 09:32:11.945771  325814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:32:11.945969  325814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 09:32:11.946471  325814 out.go:368] Setting JSON to true
	I1025 09:32:11.947429  325814 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4481,"bootTime":1761380251,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:32:11.947537  325814 start.go:141] virtualization: kvm guest
	I1025 09:32:11.949636  325814 out.go:99] [download-only-731105] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:32:11.949832  325814 notify.go:220] Checking for updates...
	I1025 09:32:11.951095  325814 out.go:171] MINIKUBE_LOCATION=21767
	I1025 09:32:11.952537  325814 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:32:11.954417  325814 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 09:32:11.955939  325814 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	I1025 09:32:11.957347  325814 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1025 09:32:11.959806  325814 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 09:32:11.960122  325814 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:32:11.985166  325814 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:32:11.985277  325814 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:32:12.047009  325814 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:47 SystemTime:2025-10-25 09:32:12.036827808 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:32:12.047123  325814 docker.go:318] overlay module found
	I1025 09:32:12.048965  325814 out.go:99] Using the docker driver based on user configuration
	I1025 09:32:12.049001  325814 start.go:305] selected driver: docker
	I1025 09:32:12.049008  325814 start.go:925] validating driver "docker" against <nil>
	I1025 09:32:12.049099  325814 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:32:12.107400  325814 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:47 SystemTime:2025-10-25 09:32:12.097247687 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:32:12.107565  325814 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:32:12.107990  325814 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1025 09:32:12.108113  325814 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 09:32:12.110029  325814 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-731105 host does not exist
	  To start a cluster, run: "minikube start -p download-only-731105"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-731105
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnlyKic (0.45s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-053726 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-053726" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-053726
--- PASS: TestDownloadOnlyKic (0.45s)

                                                
                                    
TestBinaryMirror (0.89s)

                                                
                                                
=== RUN   TestBinaryMirror
I1025 09:32:17.265789  325455 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-351445 --alsologtostderr --binary-mirror http://127.0.0.1:36611 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-351445" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-351445
--- PASS: TestBinaryMirror (0.89s)
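Note the checksum= query parameter in the download URL logged at binary.go:74: the mirror serves the binary, and the client verifies it against the upstream .sha256 digest. A minimal sketch of that verification step, assuming the binary and its digest file have already been fetched (hypothetical filenames; this is not minikube's download package):

// Sketch: verify a downloaded kubectl binary against a SHA-256 digest,
// the check requested by the checksum= parameter above. Filenames are
// hypothetical.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("kubectl") // hypothetical: the downloaded binary
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	got := hex.EncodeToString(h.Sum(nil))

	sum, err := os.ReadFile("kubectl.sha256") // hypothetical: the published digest
	if err != nil {
		log.Fatal(err)
	}
	fields := strings.Fields(string(sum))
	if len(fields) == 0 {
		log.Fatal("empty digest file")
	}
	if got != fields[0] {
		log.Fatalf("checksum mismatch: got %s, want %s", got, fields[0])
	}
	fmt.Println("checksum OK")
}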

                                                
                                    
TestOffline (60.86s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-169271 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-169271 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (58.22727961s)
helpers_test.go:175: Cleaning up "offline-crio-169271" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-169271
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-169271: (2.635558935s)
--- PASS: TestOffline (60.86s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-582494
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-582494: exit status 85 (69.854825ms)

                                                
                                                
-- stdout --
	* Profile "addons-582494" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-582494"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-582494
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-582494: exit status 85 (69.745432ms)

                                                
                                                
-- stdout --
	* Profile "addons-582494" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-582494"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (150.85s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-582494 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-582494 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m30.853910325s)
--- PASS: TestAddons/Setup (150.85s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-582494 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-582494 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.44s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-582494 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-582494 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7e2bff66-1ded-4b19-8d85-5456f9db38f3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7e2bff66-1ded-4b19-8d85-5456f9db38f3] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003742059s
addons_test.go:694: (dbg) Run:  kubectl --context addons-582494 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-582494 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-582494 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.44s)

                                                
                                    
TestAddons/StoppedEnableDisable (18.62s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-582494
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-582494: (18.313300215s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-582494
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-582494
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-582494
--- PASS: TestAddons/StoppedEnableDisable (18.62s)

                                                
                                    
TestCertOptions (24.28s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-207753 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-207753 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (20.861714856s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-207753 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-207753 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-207753 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-207753" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-207753
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-207753: (2.664102435s)
--- PASS: TestCertOptions (24.28s)
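The openssl step above inspects the API server certificate by hand to confirm that the extra --apiserver-ips and --apiserver-names values landed in its subject alternative names. The same check in a few lines of Go, as a sketch against a hypothetical local copy of the certificate (not the test's helper):

// Sketch: print the SANs of an API server certificate, which is what
// the `openssl x509 -text -noout` step above eyeballs.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Hypothetical copy of /var/lib/minikube/certs/apiserver.crt.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block in file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)    // should list localhost, www.google.com
	fmt.Println("IP SANs: ", cert.IPAddresses) // should list 127.0.0.1, 192.168.15.15
}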

                                                
                                    
TestCertExpiration (216.5s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-160366 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-160366 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (26.047474077s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-160366 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-160366 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (7.216777781s)
helpers_test.go:175: Cleaning up "cert-expiration-160366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-160366
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-160366: (3.232298838s)
--- PASS: TestCertExpiration (216.50s)
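The two starts above differ only in --cert-expiration (3m, then 8760h, i.e. one year); the ~3m gap inside the 216.5s total is the test waiting out the short validity window before the second start. A sketch of measuring the remaining validity that flag controls, reusing the PEM-parsing pattern from the previous sketch (hypothetical input path):

// Sketch: report how long a certificate remains valid, the property
// the --cert-expiration flag above configures. Input path is hypothetical.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // hypothetical
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block in file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	left := time.Until(cert.NotAfter)
	fmt.Printf("NotAfter %s (%s from now)\n", cert.NotAfter.Format(time.RFC3339), left.Round(time.Second))
	if left <= 0 {
		fmt.Println("already expired; a fresh start would have to reissue it")
	}
}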

                                                
                                    
TestForceSystemdFlag (24.91s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-107402 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1025 10:14:49.730746  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-107402 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.133237543s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-107402 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-107402" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-107402
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-107402: (2.480111379s)
--- PASS: TestForceSystemdFlag (24.91s)
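The `cat /etc/crio/crio.conf.d/02-crio.conf` step is how the test confirms --force-systemd took effect: with that flag, the drop-in is expected to pin CRI-O's cgroup manager to systemd. A sketch of scanning such a TOML drop-in for the relevant key (hypothetical local copy; cgroup_manager is CRI-O's config key for this setting):

// Sketch: look for the cgroup_manager setting in a CRI-O drop-in,
// the line the ssh/cat step above checks. Reads a hypothetical copy.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("02-crio.conf") // hypothetical copy of the node's drop-in
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "cgroup_manager") {
			fmt.Println(line) // expect: cgroup_manager = "systemd"
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}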

                                                
                                    
TestForceSystemdEnv (31.22s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-690950 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-690950 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (28.330061796s)
helpers_test.go:175: Cleaning up "force-systemd-env-690950" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-690950
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-690950: (2.894169888s)
--- PASS: TestForceSystemdEnv (31.22s)

                                                
                                    
TestErrorSpam/setup (20.27s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-814328 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-814328 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-814328 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-814328 --driver=docker  --container-runtime=crio: (20.27025292s)
--- PASS: TestErrorSpam/setup (20.27s)

                                                
                                    
TestErrorSpam/start (0.73s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 start --dry-run
--- PASS: TestErrorSpam/start (0.73s)

                                                
                                    
TestErrorSpam/status (1.01s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 status
--- PASS: TestErrorSpam/status (1.01s)

                                                
                                    
TestErrorSpam/pause (7.2s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 pause: exit status 80 (2.465326166s)

                                                
                                                
-- stdout --
	* Pausing node nospam-814328 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:38:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 pause: exit status 80 (2.299763609s)

                                                
                                                
-- stdout --
	* Pausing node nospam-814328 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:38:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 pause: exit status 80 (2.436606111s)
-- stdout --
	* Pausing node nospam-814328 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:38:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (7.20s)
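All three pause attempts above fail the same way: `sudo runc list -f json` exits 1 inside the node because /run/runc does not exist, and minikube maps that to the GUEST_PAUSE error class with exit status 80. The test still passes because it only asserts that repeated runs add no new, unexpected log output. As a rough standalone sketch (not the actual error_spam_test.go helpers) of how such a run-and-inspect step can capture the exit code in Go:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runMinikube shells out to the binary under test and returns its exit code
// plus combined output; a non-zero exit is treated as data here, not an error.
func runMinikube(args ...string) (int, []byte, error) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode(), out, nil
	}
	return 0, out, err
}

func main() {
	// Same invocation as the failing steps above; while /run/runc is missing
	// inside the node this exits 80 (minikube's GUEST_PAUSE error class).
	code, out, err := runMinikube("-p", "nospam-814328", "--log_dir", "/tmp/nospam-814328", "pause")
	if err != nil {
		panic(err)
	}
	fmt.Printf("exit=%d\n%s", code, out)
}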
TestErrorSpam/unpause (5.85s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 unpause: exit status 80 (2.280796749s)
-- stdout --
	* Unpausing node nospam-814328 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:38:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 unpause: exit status 80 (2.071459319s)
-- stdout --
	* Unpausing node nospam-814328 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:38:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 unpause: exit status 80 (1.495598875s)
-- stdout --
	* Unpausing node nospam-814328 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:38:40Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.85s)
TestErrorSpam/stop (12.59s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 stop: (12.366969045s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814328 --log_dir /tmp/nospam-814328 stop
--- PASS: TestErrorSpam/stop (12.59s)
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21767-321838/.minikube/files/etc/test/nested/copy/325455/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
TestFunctional/serial/StartWithProxy (38.48s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-558764 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-558764 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (38.479764116s)
--- PASS: TestFunctional/serial/StartWithProxy (38.48s)
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
TestFunctional/serial/SoftStart (6.74s)
=== RUN   TestFunctional/serial/SoftStart
I1025 09:39:36.040428  325455 config.go:182] Loaded profile config "functional-558764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-558764 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-558764 --alsologtostderr -v=8: (6.741901933s)
functional_test.go:678: soft start took 6.742803574s for "functional-558764" cluster.
I1025 09:39:42.782836  325455 config.go:182] Loaded profile config "functional-558764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.74s)
TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)
TestFunctional/serial/KubectlGetPods (0.11s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-558764 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)
TestFunctional/serial/CacheCmd/cache/add_remote (3.15s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-558764 cache add registry.k8s.io/pause:3.1: (1.068230322s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-558764 cache add registry.k8s.io/pause:3.3: (1.090578426s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.15s)
TestFunctional/serial/CacheCmd/cache/add_local (1.75s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-558764 /tmp/TestFunctionalserialCacheCmdcacheadd_local389482049/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 cache add minikube-local-cache-test:functional-558764
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-558764 cache add minikube-local-cache-test:functional-558764: (1.383711317s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 cache delete minikube-local-cache-test:functional-558764
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-558764
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.75s)
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)
TestFunctional/serial/CacheCmd/cache/list (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)
TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558764 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (299.035908ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E1025 09:39:49.730936  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:39:49.737407  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:39:49.748878  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:39:49.770438  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:39:49.811925  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:39:49.893411  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)
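The cache_reload sequence above is: remove a cached image from the node with crictl, confirm `crictl inspecti` now fails (exit status 1), run `cache reload`, then confirm the image is present again. A condensed sketch of that flow, reusing the profile and image names quoted in the log:

package main

import (
	"fmt"
	"os/exec"
)

// mk runs the minikube binary under test against the functional profile,
// matching the commands quoted in the log.
func mk(args ...string) error {
	all := append([]string{"-p", "functional-558764"}, args...)
	return exec.Command("out/minikube-linux-amd64", all...).Run()
}

func main() {
	img := "registry.k8s.io/pause:latest"
	_ = mk("ssh", "sudo", "crictl", "rmi", img) // drop the image inside the node
	if mk("ssh", "sudo", "crictl", "inspecti", img) == nil {
		fmt.Println("image unexpectedly still present")
		return
	}
	if err := mk("cache", "reload"); err != nil { // push cached images back into the node
		panic(err)
	}
	if err := mk("ssh", "sudo", "crictl", "inspecti", img); err != nil {
		panic(fmt.Errorf("image still missing after reload: %w", err))
	}
	fmt.Println("cache reload restored", img)
}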
TestFunctional/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
E1025 09:39:50.055494  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)
TestFunctional/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 kubectl -- --context functional-558764 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-558764 get pods
E1025 09:39:50.377246  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
TestFunctional/serial/ExtraConfig (76.95s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-558764 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1025 09:39:51.019118  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:39:52.301262  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:39:54.863590  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:39:59.984910  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:40:10.226643  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:40:30.708623  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-558764 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m16.954206818s)
functional_test.go:776: restart took 1m16.954342488s for "functional-558764" cluster.
I1025 09:41:07.359459  325455 config.go:182] Loaded profile config "functional-558764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (76.95s)
TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-558764 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
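ComponentHealth reads the control-plane pods as JSON and checks each pod's phase and Ready condition, which is what yields the `etcd phase: Running` / `etcd status: Ready` pairs above. A minimal standalone sketch of the same check, assuming kubectl on PATH and the context from the log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-558764",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		status := "NotReady"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				status = "Ready"
			}
		}
		// control-plane pods carry a "component" label (etcd, kube-apiserver, ...)
		fmt.Printf("%s phase: %s\n%s status: %s\n",
			p.Metadata.Labels["component"], p.Status.Phase,
			p.Metadata.Labels["component"], status)
	}
}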
TestFunctional/serial/LogsCmd (1.34s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-558764 logs: (1.337982333s)
--- PASS: TestFunctional/serial/LogsCmd (1.34s)
TestFunctional/serial/LogsFileCmd (1.35s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 logs --file /tmp/TestFunctionalserialLogsFileCmd2697666226/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-558764 logs --file /tmp/TestFunctionalserialLogsFileCmd2697666226/001/logs.txt: (1.348419099s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.35s)
TestFunctional/serial/InvalidService (4.67s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-558764 apply -f testdata/invalidsvc.yaml
E1025 09:41:11.670663  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-558764
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-558764: exit status 115 (370.514226ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30683 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-558764 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-558764 delete -f testdata/invalidsvc.yaml: (1.120272021s)
--- PASS: TestFunctional/serial/InvalidService (4.67s)
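The exit status 115 (SVC_UNREACHABLE) above is reported even though a NodePort URL can be rendered: the Service exists but selects no running pod. One way to observe the same condition directly is to look for empty subsets on the Service's Endpoints object; a sketch, assuming the invalid-svc Service from testdata/invalidsvc.yaml is still applied:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-558764",
		"get", "endpoints", "invalid-svc", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var ep struct {
		Subsets []json.RawMessage `json:"subsets"`
	}
	if err := json.Unmarshal(out, &ep); err != nil {
		panic(err)
	}
	if len(ep.Subsets) == 0 {
		// no ready pod backs the service, hence SVC_UNREACHABLE above
		fmt.Println("no endpoints for invalid-svc: service not available")
	}
}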
TestFunctional/parallel/ConfigCmd (0.52s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558764 config get cpus: exit status 14 (104.613242ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558764 config get cpus: exit status 14 (78.995121ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)
TestFunctional/parallel/DashboardCmd (6.67s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-558764 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-558764 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 359649: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.67s)
TestFunctional/parallel/DryRun (0.52s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-558764 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-558764 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (223.368762ms)
-- stdout --
	* [functional-558764] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1025 09:41:15.585729  358303 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:41:15.585867  358303 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:41:15.585879  358303 out.go:374] Setting ErrFile to fd 2...
	I1025 09:41:15.585886  358303 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:41:15.586200  358303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 09:41:15.586826  358303 out.go:368] Setting JSON to false
	I1025 09:41:15.588018  358303 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5025,"bootTime":1761380251,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:41:15.588184  358303 start.go:141] virtualization: kvm guest
	I1025 09:41:15.590347  358303 out.go:179] * [functional-558764] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:41:15.593640  358303 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 09:41:15.593856  358303 notify.go:220] Checking for updates...
	I1025 09:41:15.596334  358303 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:41:15.600567  358303 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 09:41:15.602045  358303 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	I1025 09:41:15.603630  358303 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:41:15.605237  358303 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:41:15.607140  358303 config.go:182] Loaded profile config "functional-558764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:41:15.607905  358303 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:41:15.639944  358303 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:41:15.640059  358303 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:41:15.723759  358303 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-10-25 09:41:15.709957361 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:41:15.723908  358303 docker.go:318] overlay module found
	I1025 09:41:15.725888  358303 out.go:179] * Using the docker driver based on existing profile
	I1025 09:41:15.727285  358303 start.go:305] selected driver: docker
	I1025 09:41:15.727306  358303 start.go:925] validating driver "docker" against &{Name:functional-558764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-558764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:41:15.727446  358303 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:41:15.729550  358303 out.go:203] 
	W1025 09:41:15.731002  358303 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1025 09:41:15.732377  358303 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-558764 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.52s)
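Both dry runs fail fast in validation: the requested 250MB is below the usable minimum of 1800MB that the error message cites, so minikube exits with RSRC_INSUFFICIENT_REQ_MEMORY (status 23) before touching the driver. A toy version of that bound check (the constant and function names are illustrative, not minikube's actual code):

package main

import "fmt"

// minUsableMemoryMB mirrors the bound quoted in the failure message above.
const minUsableMemoryMB = 1800

func validateMemoryMB(requested int) error {
	if requested < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requested, minUsableMemoryMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemoryMB(250))  // fails, as in the dry run above
	fmt.Println(validateMemoryMB(4096)) // passes
}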
TestFunctional/parallel/InternationalLanguage (0.21s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-558764 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-558764 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (211.758597ms)
-- stdout --
	* [functional-558764] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1025 09:41:15.377883  358122 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:41:15.378160  358122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:41:15.378170  358122 out.go:374] Setting ErrFile to fd 2...
	I1025 09:41:15.378173  358122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:41:15.378545  358122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 09:41:15.379020  358122 out.go:368] Setting JSON to false
	I1025 09:41:15.380022  358122 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5024,"bootTime":1761380251,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:41:15.380125  358122 start.go:141] virtualization: kvm guest
	I1025 09:41:15.382454  358122 out.go:179] * [functional-558764] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1025 09:41:15.384055  358122 notify.go:220] Checking for updates...
	I1025 09:41:15.384109  358122 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 09:41:15.386342  358122 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:41:15.387876  358122 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 09:41:15.389401  358122 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	I1025 09:41:15.390891  358122 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:41:15.392452  358122 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:41:15.394303  358122 config.go:182] Loaded profile config "functional-558764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:41:15.394944  358122 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:41:15.421937  358122 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:41:15.422130  358122 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:41:15.494159  358122 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-10-25 09:41:15.481464648 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:41:15.494311  358122 docker.go:318] overlay module found
	I1025 09:41:15.496336  358122 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1025 09:41:15.497540  358122 start.go:305] selected driver: docker
	I1025 09:41:15.497559  358122 start.go:925] validating driver "docker" against &{Name:functional-558764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-558764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:41:15.497691  358122 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:41:15.499727  358122 out.go:203] 
	W1025 09:41:15.501179  358122 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1025 09:41:15.504444  358122 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)
TestFunctional/parallel/StatusCmd (1.25s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.25s)
TestFunctional/parallel/AddonsCmd (0.19s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)
TestFunctional/parallel/PersistentVolumeClaim (26.4s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [37910166-127b-4faf-a814-baf4b226c5eb] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00425444s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-558764 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-558764 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-558764 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-558764 apply -f testdata/storage-provisioner/pod.yaml
I1025 09:41:23.095156  325455 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [8a7faaab-c3ba-4ca6-bdc8-db65f6252743] Pending
helpers_test.go:352: "sp-pod" [8a7faaab-c3ba-4ca6-bdc8-db65f6252743] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [8a7faaab-c3ba-4ca6-bdc8-db65f6252743] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004838903s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-558764 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-558764 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-558764 apply -f testdata/storage-provisioner/pod.yaml
I1025 09:41:35.991871  325455 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [0c4ea9b1-3cb1-4f93-8f97-27028f845664] Pending
helpers_test.go:352: "sp-pod" [0c4ea9b1-3cb1-4f93-8f97-27028f845664] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [0c4ea9b1-3cb1-4f93-8f97-27028f845664] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004012412s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-558764 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.40s)
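The repeated "waiting 6m0s for pods matching ..." lines are a poll for a labeled pod to reach Running. A hedged sketch of that loop, shelling out to kubectl rather than using minikube's internal helpers; the context name and selectors are the ones from the log above:

-- example (editor sketch, not from this run) --
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPod polls kubectl until a pod matching the label selector reports
// phase Running, or the deadline passes (mirroring the wait loop logged above).
func waitForPod(kubeContext, namespace, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"-n", namespace, "get", "pods", "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for pods matching %q", selector)
}

func main() {
	// Context, namespace and selector as in the run above.
	if err := waitForPod("functional-558764", "default", "test=storage-provisioner", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Running")
}
-- /example --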

TestFunctional/parallel/SSHCmd (0.68s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.68s)

TestFunctional/parallel/CpCmd (2.03s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh -n functional-558764 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 cp functional-558764:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3656229251/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh -n functional-558764 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh -n functional-558764 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.03s)

TestFunctional/parallel/MySQL (15.85s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-558764 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-slf9s" [ef000b59-9378-425e-91f4-7eceae783a45] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-slf9s" [ef000b59-9378-425e-91f4-7eceae783a45] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 14.003906591s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-558764 exec mysql-5bb876957f-slf9s -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-558764 exec mysql-5bb876957f-slf9s -- mysql -ppassword -e "show databases;": exit status 1 (91.694028ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1025 09:42:00.522105  325455 retry.go:31] will retry after 1.484571831s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-558764 exec mysql-5bb876957f-slf9s -- mysql -ppassword -e "show databases;"
E1025 09:42:33.592407  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:44:49.731562  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:45:17.434773  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:49:49.731524  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (15.85s)
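The retry.go lines show the pattern at work here: the mysql exec fails with ERROR 2002 until mysqld starts accepting connections, and the harness retries after a randomized delay. A minimal sketch of that retry-with-jitter pattern (an assumption of the general shape, not minikube's actual retry package):

-- example (editor sketch, not from this run) --
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn until it succeeds or attempts run out, sleeping a jittered
// backoff between tries (mirroring the "will retry after ..." lines above).
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	n := 0
	_ = retry(5, time.Second, func() error {
		n++
		if n < 3 { // simulate a server that is not ready yet
			return fmt.Errorf("ERROR 2002 (HY000): server not ready")
		}
		return nil
	})
}
-- /example --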

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/325455/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh "sudo cat /etc/test/nested/copy/325455/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (1.81s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/325455.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh "sudo cat /etc/ssl/certs/325455.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/325455.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh "sudo cat /usr/share/ca-certificates/325455.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3254552.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh "sudo cat /etc/ssl/certs/3254552.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3254552.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh "sudo cat /usr/share/ca-certificates/3254552.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.81s)
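The hash-named files (51391683.0, 3ec20f2e.0) follow the OpenSSL c_rehash convention: the filename is the certificate's subject hash plus a suffix. Assuming that convention is what produced these names, the expected filename can be reproduced with the openssl CLI:

-- example (editor sketch, not from this run) --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Print the OpenSSL subject hash for a PEM file; the synced cert in the
	// VM is then expected at /etc/ssl/certs/<hash>.0 (an assumption based on
	// the c_rehash convention, with the path taken from the log above).
	certPath := "/usr/share/ca-certificates/325455.pem"
	out, err := exec.Command("openssl", "x509", "-noout", "-hash", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	fmt.Printf("expected symlink name: /etc/ssl/certs/%s.0\n", hash)
}
-- /example --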

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-558764 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
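The --template argument above ranges over the first node's label map. The same template can be exercised directly with text/template; the node list below is a hand-built stand-in for kubectl's decoded JSON output:

-- example (editor sketch, not from this run) --
package main

import (
	"os"
	"text/template"
)

func main() {
	// The template the test passes to kubectl, executed here against a
	// map-based stand-in for `kubectl get nodes -o json`.
	const tmpl = `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
	nodes := map[string]interface{}{
		"items": []map[string]interface{}{
			{"metadata": map[string]interface{}{"labels": map[string]string{
				"kubernetes.io/hostname": "functional-558764",
				"kubernetes.io/os":       "linux",
			}}},
		},
	}
	t := template.Must(template.New("labels").Parse(tmpl))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}
-- /example --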

TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558764 ssh "sudo systemctl is-active docker": exit status 1 (294.845694ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558764 ssh "sudo systemctl is-active containerd": exit status 1 (291.365814ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
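systemctl is-active prints the unit state on stdout and exits non-zero for any state other than "active" (relayed over SSH as status 3 here), so "inactive" plus a non-zero exit is the expected result for a disabled runtime. A sketch of that interpretation:

-- example (editor sketch, not from this run) --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// isActive reports whether a systemd unit is active. The error from Output
// is deliberately ignored: systemctl exits non-zero for inactive units, but
// still prints the state on stdout, which is what we inspect.
func isActive(unit string) (bool, string) {
	out, _ := exec.Command("systemctl", "is-active", unit).Output()
	state := strings.TrimSpace(string(out))
	return state == "active", state
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		active, state := isActive(unit)
		fmt.Printf("%s: %s (active=%v)\n", unit, state, active)
	}
}
-- /example --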

TestFunctional/parallel/License (0.47s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.47s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "403.396445ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "75.394147ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "437.374388ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "87.331087ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-558764 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-558764 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-558764 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-558764 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 359860: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-558764 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.19s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-558764 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [d80c94e3-7750-408d-9494-264c15502d84] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
2025/10/25 09:41:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "nginx-svc" [d80c94e3-7750-408d-9494-264c15502d84] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.00380654s
I1025 09:41:30.162424  325455 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.19s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-558764 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
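The jsonpath above reads the ingress IP that the running minikube tunnel assigns to the LoadBalancer service. A hedged sketch that polls until the field is populated, using the same context and service names as this run:

-- example (editor sketch, not from this run) --
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// ingressIP polls the service until the tunnel populates
// .status.loadBalancer.ingress[0].ip, then returns it.
func ingressIP(kubeContext, svc string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext, "get", "svc", svc,
			"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
		if ip := strings.TrimSpace(string(out)); err == nil && ip != "" {
			return ip, nil
		}
		time.Sleep(time.Second)
	}
	return "", fmt.Errorf("no ingress IP for %s", svc)
}

func main() {
	ip, err := ingressIP("functional-558764", "nginx-svc", time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("tunnel endpoint:", "http://"+ip)
}
-- /example --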

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.41.124 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-558764 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (7.03s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-558764 /tmp/TestFunctionalparallelMountCmdany-port2662449417/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761385290331613822" to /tmp/TestFunctionalparallelMountCmdany-port2662449417/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761385290331613822" to /tmp/TestFunctionalparallelMountCmdany-port2662449417/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761385290331613822" to /tmp/TestFunctionalparallelMountCmdany-port2662449417/001/test-1761385290331613822
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558764 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (302.117029ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1025 09:41:30.634013  325455 retry.go:31] will retry after 654.219261ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 25 09:41 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 25 09:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 25 09:41 test-1761385290331613822
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh cat /mount-9p/test-1761385290331613822
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-558764 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [eb95ec89-cb5c-47df-ab04-260927b3b2c9] Pending
helpers_test.go:352: "busybox-mount" [eb95ec89-cb5c-47df-ab04-260927b3b2c9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [eb95ec89-cb5c-47df-ab04-260927b3b2c9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [eb95ec89-cb5c-47df-ab04-260927b3b2c9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004072494s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-558764 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-558764 /tmp/TestFunctionalparallelMountCmdany-port2662449417/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.03s)
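The first findmnt attempt above failed simply because the daemonized mount was not established yet; the harness retries until the 9p filesystem is visible in the guest. A sketch of that verify loop over minikube ssh:

-- example (editor sketch, not from this run) --
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// verifyMount retries findmnt inside the VM until the 9p mount appears;
// profile name and mount point are the ones from the log above.
func verifyMount(profile, mountPoint string, attempts int) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("minikube", "-p", profile, "ssh",
			fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
		if err := cmd.Run(); err == nil {
			return nil // mount is visible in the guest
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s never appeared as a 9p mount", mountPoint)
}

func main() {
	if err := verifyMount("functional-558764", "/mount-9p", 20); err != nil {
		panic(err)
	}
	fmt.Println("mount verified")
}
-- /example --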

TestFunctional/parallel/MountCmd/specific-port (1.68s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-558764 /tmp/TestFunctionalparallelMountCmdspecific-port1728179039/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558764 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (300.75639ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1025 09:41:37.661379  325455 retry.go:31] will retry after 303.338464ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-558764 /tmp/TestFunctionalparallelMountCmdspecific-port1728179039/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558764 ssh "sudo umount -f /mount-9p": exit status 1 (290.41595ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-558764 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-558764 /tmp/TestFunctionalparallelMountCmdspecific-port1728179039/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.68s)
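The forced umount here is cleanup: the mount daemon has already been stopped, so umount reports "not mounted" and exits with status 32 (as relayed over SSH above), which the test logs but tolerates. A sketch of cleanup that treats that case as success; the status-32 reading is taken from this run, not from a spec:

-- example (editor sketch, not from this run) --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// forceUnmount tries to unmount inside the VM; an already-gone mount is
// not an error for cleanup purposes (umount exited 32 in this run).
func forceUnmount(profile, mountPoint string) error {
	out, err := exec.Command("minikube", "-p", profile, "ssh",
		"sudo umount -f "+mountPoint).CombinedOutput()
	if err != nil && strings.Contains(string(out), "not mounted") {
		return nil // nothing left to clean up
	}
	return err
}

func main() {
	if err := forceUnmount("functional-558764", "/mount-9p"); err != nil {
		panic(err)
	}
	fmt.Println("clean")
}
-- /example --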

TestFunctional/parallel/MountCmd/VerifyCleanup (1.95s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-558764 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3300286193/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-558764 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3300286193/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-558764 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3300286193/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558764 ssh "findmnt -T" /mount1: exit status 1 (370.216122ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1025 09:41:39.412129  325455 retry.go:31] will retry after 651.962089ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-558764 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-558764 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3300286193/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-558764 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3300286193/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-558764 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3300286193/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.95s)

TestFunctional/parallel/Version/short (0.12s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 version --short
--- PASS: TestFunctional/parallel/Version/short (0.12s)

TestFunctional/parallel/Version/components (0.67s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.67s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-558764 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                 │ alpine             │ 5e7abcdd20216 │ 54.2MB │
│ docker.io/library/nginx                 │ latest             │ 657fdcd1c3659 │ 155MB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ localhost/my-image                      │ functional-558764  │ 4fc31222c80b5 │ 1.47MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-558764 image ls --format table --alsologtostderr:
I1025 09:41:56.731067  366577 out.go:360] Setting OutFile to fd 1 ...
I1025 09:41:56.731310  366577 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:41:56.731336  366577 out.go:374] Setting ErrFile to fd 2...
I1025 09:41:56.731343  366577 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:41:56.731566  366577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
I1025 09:41:56.732175  366577 config.go:182] Loaded profile config "functional-558764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:41:56.732264  366577 config.go:182] Loaded profile config "functional-558764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:41:56.732695  366577 cli_runner.go:164] Run: docker container inspect functional-558764 --format={{.State.Status}}
I1025 09:41:56.751534  366577 ssh_runner.go:195] Run: systemctl --version
I1025 09:41:56.751592  366577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558764
I1025 09:41:56.769157  366577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/functional-558764/id_rsa Username:docker}
I1025 09:41:56.871082  366577 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-558764 image ls --format json --alsologtostderr:
[{"id":"4fc31222c80b5878d26e130d144e22d5500904aa7842bc6df5c2b510bef36f3e","repoDigests":["localhost/my-image@sha256:3f0adaf2f70a8eeac3ca0d2a183f8c934d7bee95a016b0bebf6d052df4e26f99"],"repoTags":["localhost/my-image:functional-558764"],"size":"1468743"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"011f73125787f31f45b40ca69baa6e89714d3096d45c1fa358008183ce1d1e10","repoDigests":["docker.io/library/191f34a250f4b2ebe49edfaf7d62ada690fe680be8e1dd7bdcf5e8deef28c8cd-tmp@sha256:18e2e4a9a6481d93276cd60d141b262bf4373065ea70fddcf159c431fa9ea13a"],"repoTags":[],"size":"1466132"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5","repoDigests":["docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22","docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54168570"},{"id":"657fdcd1c3659cf57cfaa13f40842e0a26b49ec9654d48fdefee9fc8259b4aab","repoDigests":["docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903","docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8"],"repoTags":["docker.io/library/nginx:latest"],"size":"155467611"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-558764 image ls --format json --alsologtostderr:
I1025 09:41:56.472697  366511 out.go:360] Setting OutFile to fd 1 ...
I1025 09:41:56.472997  366511 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:41:56.473008  366511 out.go:374] Setting ErrFile to fd 2...
I1025 09:41:56.473012  366511 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:41:56.473255  366511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
I1025 09:41:56.473946  366511 config.go:182] Loaded profile config "functional-558764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:41:56.474081  366511 config.go:182] Loaded profile config "functional-558764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:41:56.474539  366511 cli_runner.go:164] Run: docker container inspect functional-558764 --format={{.State.Status}}
I1025 09:41:56.494149  366511 ssh_runner.go:195] Run: systemctl --version
I1025 09:41:56.494206  366511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558764
I1025 09:41:56.512922  366511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/functional-558764/id_rsa Username:docker}
I1025 09:41:56.617741  366511 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
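Both image ls --format json and the underlying crictl images --output json emit entries keyed id, repoDigests, repoTags and size, as the dump above shows. A small sketch decoding that shape, with one entry lifted from this run's output:

-- example (editor sketch, not from this run) --
package main

import (
	"encoding/json"
	"fmt"
)

// image matches the entries printed by `image ls --format json` above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a decimal string
}

func main() {
	// One entry lifted from the output above.
	data := []byte(`[{"id":"4fc31222c80b5878d26e130d144e22d5500904aa7842bc6df5c2b510bef36f3e",
		"repoDigests":["localhost/my-image@sha256:3f0adaf2f70a8eeac3ca0d2a183f8c934d7bee95a016b0bebf6d052df4e26f99"],
		"repoTags":["localhost/my-image:functional-558764"],"size":"1468743"}]`)
	var imgs []image
	if err := json.Unmarshal(data, &imgs); err != nil {
		panic(err)
	}
	for _, im := range imgs {
		fmt.Printf("%-40s %s %s bytes\n", im.RepoTags, im.ID[:13], im.Size)
	}
}
-- /example --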

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-558764 image ls --format yaml --alsologtostderr:
- id: 657fdcd1c3659cf57cfaa13f40842e0a26b49ec9654d48fdefee9fc8259b4aab
repoDigests:
- docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903
- docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8
repoTags:
- docker.io/library/nginx:latest
size: "155467611"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5
repoDigests:
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
- docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e
repoTags:
- docker.io/library/nginx:alpine
size: "54168570"
- id: beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee
- gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
repoTags:
- gcr.io/k8s-minikube/busybox:latest
size: "1462480"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 011f73125787f31f45b40ca69baa6e89714d3096d45c1fa358008183ce1d1e10
repoDigests:
- docker.io/library/191f34a250f4b2ebe49edfaf7d62ada690fe680be8e1dd7bdcf5e8deef28c8cd-tmp@sha256:18e2e4a9a6481d93276cd60d141b262bf4373065ea70fddcf159c431fa9ea13a
repoTags: []
size: "1466132"
- id: 4fc31222c80b5878d26e130d144e22d5500904aa7842bc6df5c2b510bef36f3e
repoDigests:
- localhost/my-image@sha256:3f0adaf2f70a8eeac3ca0d2a183f8c934d7bee95a016b0bebf6d052df4e26f99
repoTags:
- localhost/my-image:functional-558764
size: "1468743"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-558764 image ls --format yaml --alsologtostderr:
I1025 09:41:56.969784  366644 out.go:360] Setting OutFile to fd 1 ...
I1025 09:41:56.970033  366644 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:41:56.970041  366644 out.go:374] Setting ErrFile to fd 2...
I1025 09:41:56.970045  366644 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:41:56.970218  366644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
I1025 09:41:56.970833  366644 config.go:182] Loaded profile config "functional-558764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:41:56.970927  366644 config.go:182] Loaded profile config "functional-558764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:41:56.971304  366644 cli_runner.go:164] Run: docker container inspect functional-558764 --format={{.State.Status}}
I1025 09:41:56.991174  366644 ssh_runner.go:195] Run: systemctl --version
I1025 09:41:56.991221  366644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558764
I1025 09:41:57.010279  366644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/functional-558764/id_rsa Username:docker}
I1025 09:41:57.111293  366644 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558764 ssh pgrep buildkitd: exit status 1 (354.071829ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 image build -t localhost/my-image:functional-558764 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-558764 image build -t localhost/my-image:functional-558764 testdata/build --alsologtostderr: (3.347686302s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-558764 image build -t localhost/my-image:functional-558764 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 011f7312578
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-558764
--> 4fc31222c80
Successfully tagged localhost/my-image:functional-558764
4fc31222c80b5878d26e130d144e22d5500904aa7842bc6df5c2b510bef36f3e
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-558764 image build -t localhost/my-image:functional-558764 testdata/build --alsologtostderr:
I1025 09:41:52.895132  365926 out.go:360] Setting OutFile to fd 1 ...
I1025 09:41:52.895304  365926 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:41:52.895313  365926 out.go:374] Setting ErrFile to fd 2...
I1025 09:41:52.895330  365926 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:41:52.896005  365926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
I1025 09:41:52.896984  365926 config.go:182] Loaded profile config "functional-558764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:41:52.897929  365926 config.go:182] Loaded profile config "functional-558764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:41:52.898548  365926 cli_runner.go:164] Run: docker container inspect functional-558764 --format={{.State.Status}}
I1025 09:41:52.922991  365926 ssh_runner.go:195] Run: systemctl --version
I1025 09:41:52.923064  365926 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558764
I1025 09:41:52.948544  365926 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/functional-558764/id_rsa Username:docker}
I1025 09:41:53.064819  365926 build_images.go:161] Building image from path: /tmp/build.343202470.tar
I1025 09:41:53.064956  365926 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1025 09:41:53.076887  365926 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.343202470.tar
I1025 09:41:53.082049  365926 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.343202470.tar: stat -c "%s %y" /var/lib/minikube/build/build.343202470.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.343202470.tar': No such file or directory
I1025 09:41:53.082092  365926 ssh_runner.go:362] scp /tmp/build.343202470.tar --> /var/lib/minikube/build/build.343202470.tar (3072 bytes)
I1025 09:41:53.109533  365926 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.343202470
I1025 09:41:53.121032  365926 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.343202470 -xf /var/lib/minikube/build/build.343202470.tar
I1025 09:41:53.132550  365926 crio.go:315] Building image: /var/lib/minikube/build/build.343202470
I1025 09:41:53.132640  365926 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-558764 /var/lib/minikube/build/build.343202470 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1025 09:41:56.136309  365926 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-558764 /var/lib/minikube/build/build.343202470 --cgroup-manager=cgroupfs: (3.003639554s)
I1025 09:41:56.136429  365926 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.343202470
I1025 09:41:56.145754  365926 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.343202470.tar
I1025 09:41:56.154483  365926 build_images.go:217] Built localhost/my-image:functional-558764 from /tmp/build.343202470.tar
I1025 09:41:56.154515  365926 build_images.go:133] succeeded building to: functional-558764
I1025 09:41:56.154520  365926 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.95s)
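
Note: with the crio runtime there is no buildkitd on the node (the pgrep probe above fails by design), so image build packs the build context into a tar, ships it to the node, and runs podman there, exactly as the build_images.go lines show. A minimal sketch of the same flow by hand; the tar name and context directory are illustrative, not minikube's internal temp paths:

  tar -cf /tmp/build-ctx.tar -C testdata/build .
  minikube -p functional-558764 cp /tmp/build-ctx.tar /var/lib/minikube/build/build-ctx.tar
  minikube -p functional-558764 ssh -- sudo mkdir -p /var/lib/minikube/build/ctx
  minikube -p functional-558764 ssh -- sudo tar -C /var/lib/minikube/build/ctx -xf /var/lib/minikube/build/build-ctx.tar
  minikube -p functional-558764 ssh -- sudo podman build -t localhost/my-image:functional-558764 \
    /var/lib/minikube/build/ctx --cgroup-manager=cgroupfs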

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.52s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.498776122s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-558764
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.52s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)
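
Note: all three UpdateContextCmd subtests exercise minikube update-context, which rewrites the profile's kubeconfig entry to the cluster's current API server address. A quick way to confirm the rewritten server URL; the jsonpath shape assumes a standard kubeconfig layout:

  minikube -p functional-558764 update-context
  kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-558764")].cluster.server}'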

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 image rm kicbase/echo-server:functional-558764 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.72s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-558764 service list: (1.72320204s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.72s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.72s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-558764 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-558764 service list -o json: (1.723264526s)
functional_test.go:1504: Took "1.723371226s" to run "out/minikube-linux-amd64 -p functional-558764 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.72s)
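
Note: service list -o json emits a JSON array of per-service objects (Namespace, Name, URLs in current minikube releases; verify the field names against your version), which makes the output scriptable, e.g. with jq:

  minikube -p functional-558764 service list -o json \
    | jq -r '.[] | "\(.Namespace)/\(.Name)\t\(.URLs | join(","))"'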

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-558764
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-558764
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-558764
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (119.18s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-437238 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m58.383236852s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (119.18s)
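
Note: the --ha flag provisions multiple control-plane nodes behind one virtual API endpoint (https://192.168.49.254:8443 in the status logs further down). A sketch of the equivalent manual invocation plus follow-up checks:

  minikube start -p ha-437238 --ha --memory 3072 --wait true \
    --driver=docker --container-runtime=crio
  minikube -p ha-437238 status                 # one block per node
  kubectl --context ha-437238 get nodes        # control planes plus workers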

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.73s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-437238 kubectl -- rollout status deployment/busybox: (3.51903485s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 kubectl -- exec busybox-7b57f96db7-8289h -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 kubectl -- exec busybox-7b57f96db7-gqv8z -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 kubectl -- exec busybox-7b57f96db7-t8qwq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 kubectl -- exec busybox-7b57f96db7-8289h -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 kubectl -- exec busybox-7b57f96db7-gqv8z -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 kubectl -- exec busybox-7b57f96db7-t8qwq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 kubectl -- exec busybox-7b57f96db7-8289h -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 kubectl -- exec busybox-7b57f96db7-gqv8z -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 kubectl -- exec busybox-7b57f96db7-t8qwq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.73s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.13s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 kubectl -- exec busybox-7b57f96db7-8289h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 kubectl -- exec busybox-7b57f96db7-8289h -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 kubectl -- exec busybox-7b57f96db7-gqv8z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 kubectl -- exec busybox-7b57f96db7-gqv8z -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 kubectl -- exec busybox-7b57f96db7-t8qwq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 kubectl -- exec busybox-7b57f96db7-t8qwq -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.13s)
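
Note: the awk 'NR==5' in the exec commands above depends on busybox nslookup's fixed output layout, where the fifth line carries the resolved address of host.minikube.internal; pinging that address proves pod-to-host reachability. The two-step probe, restated:

  kubectl --context ha-437238 exec busybox-7b57f96db7-8289h -- \
    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  kubectl --context ha-437238 exec busybox-7b57f96db7-8289h -- \
    sh -c "ping -c 1 192.168.49.1"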

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.7s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-437238 node add --alsologtostderr -v 5: (23.74263898s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.70s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-437238 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.96s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.96s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (18.36s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 cp testdata/cp-test.txt ha-437238:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 cp ha-437238:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1931507434/001/cp-test_ha-437238.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 cp ha-437238:/home/docker/cp-test.txt ha-437238-m02:/home/docker/cp-test_ha-437238_ha-437238-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238-m02 "sudo cat /home/docker/cp-test_ha-437238_ha-437238-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 cp ha-437238:/home/docker/cp-test.txt ha-437238-m03:/home/docker/cp-test_ha-437238_ha-437238-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238-m03 "sudo cat /home/docker/cp-test_ha-437238_ha-437238-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 cp ha-437238:/home/docker/cp-test.txt ha-437238-m04:/home/docker/cp-test_ha-437238_ha-437238-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238-m04 "sudo cat /home/docker/cp-test_ha-437238_ha-437238-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 cp testdata/cp-test.txt ha-437238-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 cp ha-437238-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1931507434/001/cp-test_ha-437238-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 cp ha-437238-m02:/home/docker/cp-test.txt ha-437238:/home/docker/cp-test_ha-437238-m02_ha-437238.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238 "sudo cat /home/docker/cp-test_ha-437238-m02_ha-437238.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 cp ha-437238-m02:/home/docker/cp-test.txt ha-437238-m03:/home/docker/cp-test_ha-437238-m02_ha-437238-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238-m03 "sudo cat /home/docker/cp-test_ha-437238-m02_ha-437238-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 cp ha-437238-m02:/home/docker/cp-test.txt ha-437238-m04:/home/docker/cp-test_ha-437238-m02_ha-437238-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238-m04 "sudo cat /home/docker/cp-test_ha-437238-m02_ha-437238-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 cp testdata/cp-test.txt ha-437238-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 cp ha-437238-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1931507434/001/cp-test_ha-437238-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 cp ha-437238-m03:/home/docker/cp-test.txt ha-437238:/home/docker/cp-test_ha-437238-m03_ha-437238.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238 "sudo cat /home/docker/cp-test_ha-437238-m03_ha-437238.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 cp ha-437238-m03:/home/docker/cp-test.txt ha-437238-m02:/home/docker/cp-test_ha-437238-m03_ha-437238-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238-m02 "sudo cat /home/docker/cp-test_ha-437238-m03_ha-437238-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 cp ha-437238-m03:/home/docker/cp-test.txt ha-437238-m04:/home/docker/cp-test_ha-437238-m03_ha-437238-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238-m04 "sudo cat /home/docker/cp-test_ha-437238-m03_ha-437238-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 cp testdata/cp-test.txt ha-437238-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 cp ha-437238-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1931507434/001/cp-test_ha-437238-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 cp ha-437238-m04:/home/docker/cp-test.txt ha-437238:/home/docker/cp-test_ha-437238-m04_ha-437238.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238 "sudo cat /home/docker/cp-test_ha-437238-m04_ha-437238.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 cp ha-437238-m04:/home/docker/cp-test.txt ha-437238-m02:/home/docker/cp-test_ha-437238-m04_ha-437238-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238-m02 "sudo cat /home/docker/cp-test_ha-437238-m04_ha-437238-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 cp ha-437238-m04:/home/docker/cp-test.txt ha-437238-m03:/home/docker/cp-test_ha-437238-m04_ha-437238-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 ssh -n ha-437238-m03 "sudo cat /home/docker/cp-test_ha-437238-m04_ha-437238-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.36s)
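
Note: minikube cp targets take the form [node:]path, with a bare path meaning the primary control plane; the matrix above covers host-to-node, node-to-host, and node-to-node. Representative invocations, using profile and node names from this run:

  minikube -p ha-437238 cp testdata/cp-test.txt ha-437238-m02:/home/docker/cp-test.txt    # host -> node
  minikube -p ha-437238 cp ha-437238-m02:/home/docker/cp-test.txt /tmp/cp-test_m02.txt    # node -> host
  minikube -p ha-437238 cp ha-437238-m02:/home/docker/cp-test.txt \
    ha-437238-m03:/home/docker/cp-test.txt                                                # node -> node
  minikube -p ha-437238 ssh -n ha-437238-m03 -- sudo cat /home/docker/cp-test.txt         # verify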

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (18.93s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-437238 node stop m02 --alsologtostderr -v 5: (18.18504023s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-437238 status --alsologtostderr -v 5: exit status 7 (745.429273ms)

                                                
                                                
-- stdout --
	ha-437238
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-437238-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-437238-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-437238-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 09:54:37.600434  390416 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:54:37.600805  390416 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:54:37.600884  390416 out.go:374] Setting ErrFile to fd 2...
	I1025 09:54:37.600897  390416 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:54:37.601410  390416 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 09:54:37.602029  390416 out.go:368] Setting JSON to false
	I1025 09:54:37.602081  390416 mustload.go:65] Loading cluster: ha-437238
	I1025 09:54:37.602178  390416 notify.go:220] Checking for updates...
	I1025 09:54:37.602547  390416 config.go:182] Loaded profile config "ha-437238": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:54:37.602565  390416 status.go:174] checking status of ha-437238 ...
	I1025 09:54:37.603045  390416 cli_runner.go:164] Run: docker container inspect ha-437238 --format={{.State.Status}}
	I1025 09:54:37.622959  390416 status.go:371] ha-437238 host status = "Running" (err=<nil>)
	I1025 09:54:37.622986  390416 host.go:66] Checking if "ha-437238" exists ...
	I1025 09:54:37.623291  390416 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-437238
	I1025 09:54:37.643110  390416 host.go:66] Checking if "ha-437238" exists ...
	I1025 09:54:37.643440  390416 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:54:37.643503  390416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-437238
	I1025 09:54:37.664564  390416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/ha-437238/id_rsa Username:docker}
	I1025 09:54:37.768364  390416 ssh_runner.go:195] Run: systemctl --version
	I1025 09:54:37.776010  390416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:54:37.789698  390416 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:54:37.851748  390416 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-25 09:54:37.840700426 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:54:37.852496  390416 kubeconfig.go:125] found "ha-437238" server: "https://192.168.49.254:8443"
	I1025 09:54:37.852534  390416 api_server.go:166] Checking apiserver status ...
	I1025 09:54:37.852579  390416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:54:37.865925  390416 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1246/cgroup
	W1025 09:54:37.875185  390416 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1246/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:54:37.875251  390416 ssh_runner.go:195] Run: ls
	I1025 09:54:37.879298  390416 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1025 09:54:37.883535  390416 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1025 09:54:37.883558  390416 status.go:463] ha-437238 apiserver status = Running (err=<nil>)
	I1025 09:54:37.883569  390416 status.go:176] ha-437238 status: &{Name:ha-437238 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:54:37.883595  390416 status.go:174] checking status of ha-437238-m02 ...
	I1025 09:54:37.883862  390416 cli_runner.go:164] Run: docker container inspect ha-437238-m02 --format={{.State.Status}}
	I1025 09:54:37.902851  390416 status.go:371] ha-437238-m02 host status = "Stopped" (err=<nil>)
	I1025 09:54:37.902878  390416 status.go:384] host is not running, skipping remaining checks
	I1025 09:54:37.902886  390416 status.go:176] ha-437238-m02 status: &{Name:ha-437238-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:54:37.902909  390416 status.go:174] checking status of ha-437238-m03 ...
	I1025 09:54:37.903163  390416 cli_runner.go:164] Run: docker container inspect ha-437238-m03 --format={{.State.Status}}
	I1025 09:54:37.921782  390416 status.go:371] ha-437238-m03 host status = "Running" (err=<nil>)
	I1025 09:54:37.921815  390416 host.go:66] Checking if "ha-437238-m03" exists ...
	I1025 09:54:37.922075  390416 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-437238-m03
	I1025 09:54:37.940711  390416 host.go:66] Checking if "ha-437238-m03" exists ...
	I1025 09:54:37.940967  390416 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:54:37.941004  390416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-437238-m03
	I1025 09:54:37.959900  390416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/ha-437238-m03/id_rsa Username:docker}
	I1025 09:54:38.060484  390416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:54:38.074212  390416 kubeconfig.go:125] found "ha-437238" server: "https://192.168.49.254:8443"
	I1025 09:54:38.074240  390416 api_server.go:166] Checking apiserver status ...
	I1025 09:54:38.074274  390416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:54:38.086569  390416 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1176/cgroup
	W1025 09:54:38.096156  390416 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1176/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:54:38.096238  390416 ssh_runner.go:195] Run: ls
	I1025 09:54:38.100520  390416 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1025 09:54:38.104842  390416 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1025 09:54:38.104872  390416 status.go:463] ha-437238-m03 apiserver status = Running (err=<nil>)
	I1025 09:54:38.104885  390416 status.go:176] ha-437238-m03 status: &{Name:ha-437238-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:54:38.104904  390416 status.go:174] checking status of ha-437238-m04 ...
	I1025 09:54:38.105214  390416 cli_runner.go:164] Run: docker container inspect ha-437238-m04 --format={{.State.Status}}
	I1025 09:54:38.125084  390416 status.go:371] ha-437238-m04 host status = "Running" (err=<nil>)
	I1025 09:54:38.125111  390416 host.go:66] Checking if "ha-437238-m04" exists ...
	I1025 09:54:38.125380  390416 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-437238-m04
	I1025 09:54:38.143232  390416 host.go:66] Checking if "ha-437238-m04" exists ...
	I1025 09:54:38.143602  390416 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:54:38.143657  390416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-437238-m04
	I1025 09:54:38.164046  390416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/ha-437238-m04/id_rsa Username:docker}
	I1025 09:54:38.265175  390416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:54:38.279121  390416 status.go:176] ha-437238-m04 status: &{Name:ha-437238-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (18.93s)
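
Note: with m02 stopped, minikube status exits non-zero (7 in this run) instead of 0, so automation can gate on the exit code rather than parsing the per-node table. A sketch; treat any non-zero code as a degraded profile unless you have checked minikube's documented exit-code map:

  minikube -p ha-437238 status
  rc=$?
  if [ "$rc" -ne 0 ]; then
    echo "profile degraded: status exited with $rc"   # 7 above: a node is stopped
  fi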

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (14.67s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 node start m02 --alsologtostderr -v 5
E1025 09:54:49.731663  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-437238 node start m02 --alsologtostderr -v 5: (13.633410767s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.67s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.96s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.96s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (120.93s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-437238 stop --alsologtostderr -v 5: (51.199794204s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 start --wait true --alsologtostderr -v 5
E1025 09:56:12.796973  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:56:14.980633  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/functional-558764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:56:14.987169  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/functional-558764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:56:14.999378  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/functional-558764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:56:15.020818  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/functional-558764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:56:15.062219  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/functional-558764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:56:15.143657  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/functional-558764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:56:15.305343  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/functional-558764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:56:15.626990  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/functional-558764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:56:16.269034  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/functional-558764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:56:17.550651  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/functional-558764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:56:20.112413  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/functional-558764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:56:25.234796  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/functional-558764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:56:35.477168  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/functional-558764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-437238 start --wait true --alsologtostderr -v 5: (1m9.583865947s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (120.93s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.75s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 node delete m03 --alsologtostderr -v 5
E1025 09:56:55.959544  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/functional-558764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-437238 node delete m03 --alsologtostderr -v 5: (9.885411509s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.75s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (41.68s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 stop --alsologtostderr -v 5
E1025 09:57:36.922686  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/functional-558764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-437238 stop --alsologtostderr -v 5: (41.55260454s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-437238 status --alsologtostderr -v 5: exit status 7 (129.449173ms)

                                                
                                                
-- stdout --
	ha-437238
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-437238-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-437238-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 09:57:48.738136  404635 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:57:48.738294  404635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:57:48.738308  404635 out.go:374] Setting ErrFile to fd 2...
	I1025 09:57:48.738313  404635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:57:48.738530  404635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 09:57:48.738714  404635 out.go:368] Setting JSON to false
	I1025 09:57:48.738749  404635 mustload.go:65] Loading cluster: ha-437238
	I1025 09:57:48.738846  404635 notify.go:220] Checking for updates...
	I1025 09:57:48.739152  404635 config.go:182] Loaded profile config "ha-437238": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:57:48.739169  404635 status.go:174] checking status of ha-437238 ...
	I1025 09:57:48.739727  404635 cli_runner.go:164] Run: docker container inspect ha-437238 --format={{.State.Status}}
	I1025 09:57:48.763261  404635 status.go:371] ha-437238 host status = "Stopped" (err=<nil>)
	I1025 09:57:48.763295  404635 status.go:384] host is not running, skipping remaining checks
	I1025 09:57:48.763302  404635 status.go:176] ha-437238 status: &{Name:ha-437238 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:57:48.763340  404635 status.go:174] checking status of ha-437238-m02 ...
	I1025 09:57:48.763662  404635 cli_runner.go:164] Run: docker container inspect ha-437238-m02 --format={{.State.Status}}
	I1025 09:57:48.783029  404635 status.go:371] ha-437238-m02 host status = "Stopped" (err=<nil>)
	I1025 09:57:48.783080  404635 status.go:384] host is not running, skipping remaining checks
	I1025 09:57:48.783102  404635 status.go:176] ha-437238-m02 status: &{Name:ha-437238-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:57:48.783130  404635 status.go:174] checking status of ha-437238-m04 ...
	I1025 09:57:48.783435  404635 cli_runner.go:164] Run: docker container inspect ha-437238-m04 --format={{.State.Status}}
	I1025 09:57:48.801587  404635 status.go:371] ha-437238-m04 host status = "Stopped" (err=<nil>)
	I1025 09:57:48.801614  404635 status.go:384] host is not running, skipping remaining checks
	I1025 09:57:48.801623  404635 status.go:176] ha-437238-m04 status: &{Name:ha-437238-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (41.68s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (53.98s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-437238 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (53.116875066s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (53.98s)
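
Note: the go-template above prints one Ready-condition status per node (one True line per healthy node). An equivalent jsonpath form, if you find the template syntax hard to read:

  kubectl --context ha-437238 get nodes \
    -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'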

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (36.98s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 node add --control-plane --alsologtostderr -v 5
E1025 09:58:58.846112  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/functional-558764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-437238 node add --control-plane --alsologtostderr -v 5: (36.010009s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-437238 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (36.98s)
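
Note: node add --control-plane joins the new node as an additional control plane rather than a worker, rounding out the node lifecycle this group exercises. The lifecycle commands, collected from the subtests above:

  minikube -p ha-437238 node add                     # add a worker (AddWorkerNode)
  minikube -p ha-437238 node add --control-plane     # add a control plane (this test)
  minikube -p ha-437238 node stop m02                # StopSecondaryNode
  minikube -p ha-437238 node start m02               # RestartSecondaryNode
  minikube -p ha-437238 node delete m03              # DeleteSecondaryNode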

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.95s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.95s)

                                                
                                    
TestJSONOutput/start/Command (40.05s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-513456 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1025 09:59:49.731303  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-513456 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (40.051302641s)
--- PASS: TestJSONOutput/start/Command (40.05s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.66s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-513456 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-513456 --output=json --user=testUser: (6.662702762s)
--- PASS: TestJSONOutput/stop/Command (6.66s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-283610 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-283610 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (83.88119ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ec7bbca2-6d9d-40d0-8ef5-974694b5b675","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-283610] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"58426554-020b-4255-9783-0becb21efaf3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21767"}}
	{"specversion":"1.0","id":"03e91599-435e-41d5-be7a-ce5b773dee2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dfaa59d0-3884-473f-89c9-ce29150d0a8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig"}}
	{"specversion":"1.0","id":"72e75abe-fb5f-4d2d-b183-1dbc86b57ce2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube"}}
	{"specversion":"1.0","id":"07fc38bf-b2c4-4904-9f98-08dc4e216735","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d95c5056-c8ec-400c-a963-281581c43a6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1fb40ec0-740d-4b85-aa2d-4dd4cd2b3b56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-283610" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-283610
--- PASS: TestErrorJSONOutput (0.25s)
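
For reference, each stdout line above is a self-contained CloudEvents record; the "type" suffix (step/info/error) and the string-valued "data" map carry everything a consumer needs. A minimal Go sketch (illustrative, not part of the minikube test suite) that follows a stream of such events:

// parse_events.go — decode the line-delimited CloudEvents emitted by
// `minikube --output=json`, matching the shapes visible in the stdout above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"` // io.k8s.sigs.minikube.step / .info / .error
	Data        map[string]string `json:"data"` // all values are strings in this stream
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON lines in the stream
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

Piping `out/minikube-linux-amd64 start -p <profile> --output=json` into this program would print one line per step or error event.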

                                                
                                    
TestKicCustomNetwork/create_custom_network (27.92s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-592300 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-592300 --network=: (25.682612417s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-592300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-592300
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-592300: (2.218595438s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.92s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (24.25s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-665214 --network=bridge
E1025 10:01:14.980266  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/functional-558764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-665214 --network=bridge: (22.143453367s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-665214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-665214
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-665214: (2.081303081s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.25s)

                                                
                                    
TestKicExistingNetwork (26.2s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1025 10:01:18.830963  325455 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1025 10:01:18.848873  325455 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1025 10:01:18.848945  325455 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1025 10:01:18.848973  325455 cli_runner.go:164] Run: docker network inspect existing-network
W1025 10:01:18.868542  325455 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1025 10:01:18.868571  325455 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1025 10:01:18.868593  325455 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1025 10:01:18.868748  325455 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1025 10:01:18.886545  325455 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b7c770f4d6bb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:31:17:4a:ca:3a} reservation:<nil>}
I1025 10:01:18.886987  325455 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b4f930}
I1025 10:01:18.887020  325455 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1025 10:01:18.887067  325455 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1025 10:01:18.948969  325455 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-990033 --network=existing-network
E1025 10:01:42.687660  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/functional-558764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-990033 --network=existing-network: (23.989197648s)
helpers_test.go:175: Cleaning up "existing-network-990033" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-990033
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-990033: (2.056241624s)
I1025 10:01:45.014288  325455 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (26.20s)
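
The trace above shows the subnet probe behind --network=existing-network: minikube inspects the existing docker networks, skips 192.168.49.0/24 because it is taken, and creates the network on the first free candidate, 192.168.58.0/24. A rough Go sketch of the same probe (an illustration built on the docker CLI, not minikube's network_create.go; the step of 9 between candidates is inferred from the 49 -> 58 jump in the log):

// free_subnet.go — list subnets already claimed by docker networks, then
// report the first free 192.168.x.0/24 candidate.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func takenSubnets() (map[string]bool, error) {
	ids, err := exec.Command("docker", "network", "ls", "-q").Output()
	if err != nil {
		return nil, err
	}
	taken := map[string]bool{}
	for _, id := range strings.Fields(string(ids)) {
		out, err := exec.Command("docker", "network", "inspect", id,
			"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
		if err != nil {
			continue // the network may have vanished between ls and inspect
		}
		for _, s := range strings.Fields(string(out)) {
			taken[s] = true
		}
	}
	return taken, nil
}

func main() {
	taken, err := takenSubnets()
	if err != nil {
		panic(err)
	}
	for third := 49; third < 256; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[subnet] {
			fmt.Println("first free candidate:", subnet)
			return
		}
	}
	fmt.Println("no free /24 candidate left")
}

A free candidate would then be handed to `docker network create --driver=bridge --subnet=... --gateway=...`, as in the cli_runner line above.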

                                                
                                    
TestKicCustomSubnet (28.98s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-238808 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-238808 --subnet=192.168.60.0/24: (26.703796906s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-238808 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-238808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-238808
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-238808: (2.251511076s)
--- PASS: TestKicCustomSubnet (28.98s)

                                                
                                    
TestKicStaticIP (26.55s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-605474 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-605474 --static-ip=192.168.200.200: (24.130211195s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-605474 ip
helpers_test.go:175: Cleaning up "static-ip-605474" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-605474
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-605474: (2.26305232s)
--- PASS: TestKicStaticIP (26.55s)
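
The `ip` step is the whole assertion here: with --static-ip, `minikube ip` must print back exactly the requested address. A minimal sketch of that check (hypothetical harness code, not from the repo):

// static_ip_check.go — assert that the profile got the requested static IP.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	want := "192.168.200.200" // the value passed via --static-ip above
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "static-ip-605474", "ip").Output()
	if err != nil {
		panic(err)
	}
	if got := strings.TrimSpace(string(out)); got != want {
		panic(fmt.Sprintf("static IP mismatch: got %s, want %s", got, want))
	}
	fmt.Println("static IP verified:", want)
}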

                                                
                                    
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMinikubeProfile (52.38s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-529442 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-529442 --driver=docker  --container-runtime=crio: (23.417335909s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-532701 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-532701 --driver=docker  --container-runtime=crio: (22.611053119s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-529442
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-532701
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-532701" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-532701
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-532701: (2.523366563s)
helpers_test.go:175: Cleaning up "first-529442" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-529442
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-529442: (2.49849476s)
--- PASS: TestMinikubeProfile (52.38s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (5.89s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-908715 --memory=3072 --mount-string /tmp/TestMountStartserial2225082884/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-908715 --memory=3072 --mount-string /tmp/TestMountStartserial2225082884/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.888582721s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.89s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-908715 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)
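
The `ssh -- ls /minikube-host` check only proves the 9p mount point exists. A slightly stronger variant (an illustrative sketch; the profile and paths are simply the ones this run happened to use) writes a sentinel on the host side of --mount-string and reads it back from the guest:

// verify_mount.go — confirm host-to-guest visibility through the 9p mount.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	hostDir := "/tmp/TestMountStartserial2225082884/001" // host side of --mount-string
	guestDir := "/minikube-host"                         // guest side of --mount-string
	profile := "mount-start-1-908715"

	if err := os.WriteFile(filepath.Join(hostDir, "sentinel"), []byte("hello 9p\n"), 0o644); err != nil {
		panic(err)
	}
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "--", "cat", guestDir+"/sentinel").Output()
	if err != nil {
		panic(err)
	}
	if strings.TrimSpace(string(out)) != "hello 9p" {
		panic(fmt.Sprintf("mount not visible in guest, got %q", out))
	}
	fmt.Println("mount verified")
}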

                                                
                                    
TestMountStart/serial/StartWithMountSecond (5.82s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-932125 --memory=3072 --mount-string /tmp/TestMountStartserial2225082884/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-932125 --memory=3072 --mount-string /tmp/TestMountStartserial2225082884/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.821327819s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.82s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-932125 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.75s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-908715 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-908715 --alsologtostderr -v=5: (1.746373365s)
--- PASS: TestMountStart/serial/DeleteFirst (1.75s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-932125 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-932125
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-932125: (1.276487154s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.68s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-932125
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-932125: (6.681902326s)
--- PASS: TestMountStart/serial/RestartStopped (7.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-932125 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (62.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-927083 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1025 10:04:49.731864  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-927083 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m2.085636487s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (62.61s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-927083 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-927083 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-927083 -- rollout status deployment/busybox: (3.026444488s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-927083 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-927083 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-927083 -- exec busybox-7b57f96db7-gp76z -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-927083 -- exec busybox-7b57f96db7-txnkt -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-927083 -- exec busybox-7b57f96db7-gp76z -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-927083 -- exec busybox-7b57f96db7-txnkt -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-927083 -- exec busybox-7b57f96db7-gp76z -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-927083 -- exec busybox-7b57f96db7-txnkt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.59s)
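
The deploy step resolves three names from inside every busybox pod: a public name (kubernetes.io), the short in-cluster service name (kubernetes.default), and its fully qualified form. The same probe written directly against kubectl rather than through `minikube kubectl -p` (an equivalent, illustrative sketch):

// dns_check.go — resolve public and in-cluster names from each pod.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "multinode-927083",
		"get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		panic(err)
	}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		for _, host := range names {
			if err := exec.Command("kubectl", "--context", "multinode-927083",
				"exec", pod, "--", "nslookup", host).Run(); err != nil {
				panic(fmt.Sprintf("%s failed to resolve %s: %v", pod, host, err))
			}
			fmt.Printf("%s resolved %s\n", pod, host)
		}
	}
}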

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-927083 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-927083 -- exec busybox-7b57f96db7-gp76z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-927083 -- exec busybox-7b57f96db7-gp76z -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-927083 -- exec busybox-7b57f96db7-txnkt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-927083 -- exec busybox-7b57f96db7-txnkt -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                    
TestMultiNode/serial/AddNode (23.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-927083 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-927083 -v=5 --alsologtostderr: (22.420712664s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.12s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-927083 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.70s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 cp testdata/cp-test.txt multinode-927083:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 ssh -n multinode-927083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 cp multinode-927083:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1482725841/001/cp-test_multinode-927083.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 ssh -n multinode-927083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 cp multinode-927083:/home/docker/cp-test.txt multinode-927083-m02:/home/docker/cp-test_multinode-927083_multinode-927083-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 ssh -n multinode-927083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 ssh -n multinode-927083-m02 "sudo cat /home/docker/cp-test_multinode-927083_multinode-927083-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 cp multinode-927083:/home/docker/cp-test.txt multinode-927083-m03:/home/docker/cp-test_multinode-927083_multinode-927083-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 ssh -n multinode-927083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 ssh -n multinode-927083-m03 "sudo cat /home/docker/cp-test_multinode-927083_multinode-927083-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 cp testdata/cp-test.txt multinode-927083-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 ssh -n multinode-927083-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 cp multinode-927083-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1482725841/001/cp-test_multinode-927083-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 ssh -n multinode-927083-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 cp multinode-927083-m02:/home/docker/cp-test.txt multinode-927083:/home/docker/cp-test_multinode-927083-m02_multinode-927083.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 ssh -n multinode-927083-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 ssh -n multinode-927083 "sudo cat /home/docker/cp-test_multinode-927083-m02_multinode-927083.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 cp multinode-927083-m02:/home/docker/cp-test.txt multinode-927083-m03:/home/docker/cp-test_multinode-927083-m02_multinode-927083-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 ssh -n multinode-927083-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 ssh -n multinode-927083-m03 "sudo cat /home/docker/cp-test_multinode-927083-m02_multinode-927083-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 cp testdata/cp-test.txt multinode-927083-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 ssh -n multinode-927083-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 cp multinode-927083-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1482725841/001/cp-test_multinode-927083-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 ssh -n multinode-927083-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 cp multinode-927083-m03:/home/docker/cp-test.txt multinode-927083:/home/docker/cp-test_multinode-927083-m03_multinode-927083.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 ssh -n multinode-927083-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 ssh -n multinode-927083 "sudo cat /home/docker/cp-test_multinode-927083-m03_multinode-927083.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 cp multinode-927083-m03:/home/docker/cp-test.txt multinode-927083-m02:/home/docker/cp-test_multinode-927083-m03_multinode-927083-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 ssh -n multinode-927083-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 ssh -n multinode-927083-m02 "sudo cat /home/docker/cp-test_multinode-927083-m03_multinode-927083-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.55s)
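
The long run above is a full copy matrix: the test pushes testdata/cp-test.txt to every node, pulls it back to the host, and copies it between every ordered pair of nodes, re-reading the file over `ssh -n <node> sudo cat` after each hop. Condensed into a loop (an illustrative sketch of the same sequence):

// cp_matrix.go — exercise `minikube cp` across all node pairs of the profile.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	cmd := exec.Command("out/minikube-linux-amd64", append([]string{"-p", "multinode-927083"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("%v failed: %v\n%s", args, err, out))
	}
}

func main() {
	nodes := []string{"multinode-927083", "multinode-927083-m02", "multinode-927083-m03"}
	for _, src := range nodes {
		run("cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
		run("ssh", "-n", src, "sudo cat /home/docker/cp-test.txt")
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			dstPath := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			run("cp", src+":/home/docker/cp-test.txt", dst+":"+dstPath)
			run("ssh", "-n", dst, "sudo cat "+dstPath)
		}
	}
}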

                                                
                                    
TestMultiNode/serial/StopNode (2.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-927083 node stop m03: (1.293779097s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-927083 status: exit status 7 (530.0057ms)

                                                
                                                
-- stdout --
	multinode-927083
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-927083-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-927083-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-927083 status --alsologtostderr: exit status 7 (542.832798ms)

                                                
                                                
-- stdout --
	multinode-927083
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-927083-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-927083-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 10:05:42.809583  464164 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:05:42.809840  464164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:05:42.809848  464164 out.go:374] Setting ErrFile to fd 2...
	I1025 10:05:42.809851  464164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:05:42.810053  464164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 10:05:42.810223  464164 out.go:368] Setting JSON to false
	I1025 10:05:42.810260  464164 mustload.go:65] Loading cluster: multinode-927083
	I1025 10:05:42.810441  464164 notify.go:220] Checking for updates...
	I1025 10:05:42.810703  464164 config.go:182] Loaded profile config "multinode-927083": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:05:42.810726  464164 status.go:174] checking status of multinode-927083 ...
	I1025 10:05:42.811289  464164 cli_runner.go:164] Run: docker container inspect multinode-927083 --format={{.State.Status}}
	I1025 10:05:42.832513  464164 status.go:371] multinode-927083 host status = "Running" (err=<nil>)
	I1025 10:05:42.832546  464164 host.go:66] Checking if "multinode-927083" exists ...
	I1025 10:05:42.832888  464164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-927083
	I1025 10:05:42.852787  464164 host.go:66] Checking if "multinode-927083" exists ...
	I1025 10:05:42.853111  464164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:05:42.853175  464164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-927083
	I1025 10:05:42.873003  464164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/multinode-927083/id_rsa Username:docker}
	I1025 10:05:42.980517  464164 ssh_runner.go:195] Run: systemctl --version
	I1025 10:05:42.987186  464164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:05:43.000747  464164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:05:43.063298  464164 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-25 10:05:43.052449617 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:05:43.063884  464164 kubeconfig.go:125] found "multinode-927083" server: "https://192.168.67.2:8443"
	I1025 10:05:43.063918  464164 api_server.go:166] Checking apiserver status ...
	I1025 10:05:43.063952  464164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:05:43.076788  464164 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1229/cgroup
	W1025 10:05:43.085995  464164 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1229/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:05:43.086060  464164 ssh_runner.go:195] Run: ls
	I1025 10:05:43.090108  464164 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1025 10:05:43.095654  464164 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1025 10:05:43.095681  464164 status.go:463] multinode-927083 apiserver status = Running (err=<nil>)
	I1025 10:05:43.095693  464164 status.go:176] multinode-927083 status: &{Name:multinode-927083 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 10:05:43.095714  464164 status.go:174] checking status of multinode-927083-m02 ...
	I1025 10:05:43.095957  464164 cli_runner.go:164] Run: docker container inspect multinode-927083-m02 --format={{.State.Status}}
	I1025 10:05:43.115107  464164 status.go:371] multinode-927083-m02 host status = "Running" (err=<nil>)
	I1025 10:05:43.115134  464164 host.go:66] Checking if "multinode-927083-m02" exists ...
	I1025 10:05:43.115441  464164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-927083-m02
	I1025 10:05:43.134776  464164 host.go:66] Checking if "multinode-927083-m02" exists ...
	I1025 10:05:43.135095  464164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:05:43.135142  464164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-927083-m02
	I1025 10:05:43.153843  464164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21767-321838/.minikube/machines/multinode-927083-m02/id_rsa Username:docker}
	I1025 10:05:43.254378  464164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:05:43.268265  464164 status.go:176] multinode-927083-m02 status: &{Name:multinode-927083-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1025 10:05:43.268305  464164 status.go:174] checking status of multinode-927083-m03 ...
	I1025 10:05:43.268669  464164 cli_runner.go:164] Run: docker container inspect multinode-927083-m03 --format={{.State.Status}}
	I1025 10:05:43.288686  464164 status.go:371] multinode-927083-m03 host status = "Stopped" (err=<nil>)
	I1025 10:05:43.288714  464164 status.go:384] host is not running, skipping remaining checks
	I1025 10:05:43.288721  464164 status.go:176] multinode-927083-m03 status: &{Name:multinode-927083-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.37s)
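
Note the exit-code convention the test relies on: `minikube status` exits 7 once any node is stopped, while the per-node report still goes to stdout. A wrapper therefore has to read the output out of the *exec.ExitError path instead of treating any non-zero exit as fatal (illustrative sketch):

// status_exit.go — interpret minikube status exit codes.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-927083", "status")
	out, err := cmd.Output() // stdout is still populated on a non-zero exit
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &ee):
		fmt.Printf("status exited %d (some node not running):\n%s", ee.ExitCode(), out)
	default:
		panic(err) // binary not found, etc.
	}
}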

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-927083 node start m03 -v=5 --alsologtostderr: (7.124061657s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.88s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (82.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-927083
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-927083
E1025 10:06:14.983944  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/functional-558764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-927083: (31.508057035s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-927083 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-927083 --wait=true -v=5 --alsologtostderr: (50.661620357s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-927083
--- PASS: TestMultiNode/serial/RestartKeepsNodes (82.31s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-927083 node delete m03: (4.751063569s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.39s)
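
The quoted go-template emits one Ready-condition status per node, so the assertion reduces to "every remaining node reports True". The same check via jsonpath (an equivalent, illustrative form of the template the test uses):

// nodes_ready.go — require a Ready=True condition on every node.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "multinode-927083", "get", "nodes",
		"-o", `jsonpath={range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}`).Output()
	if err != nil {
		panic(err)
	}
	for i, s := range strings.Fields(string(out)) {
		if s != "True" {
			panic(fmt.Sprintf("node %d not Ready: %s", i, s))
		}
	}
	fmt.Println("all nodes Ready")
}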

                                                
                                    
TestMultiNode/serial/StopMultiNode (28.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-927083 stop: (28.598064111s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-927083 status: exit status 7 (108.203852ms)

                                                
                                                
-- stdout --
	multinode-927083
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-927083-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-927083 status --alsologtostderr: exit status 7 (104.604583ms)

                                                
                                                
-- stdout --
	multinode-927083
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-927083-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 10:07:47.634693  473858 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:07:47.634796  473858 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:07:47.634800  473858 out.go:374] Setting ErrFile to fd 2...
	I1025 10:07:47.634804  473858 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:07:47.635015  473858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 10:07:47.635171  473858 out.go:368] Setting JSON to false
	I1025 10:07:47.635204  473858 mustload.go:65] Loading cluster: multinode-927083
	I1025 10:07:47.635247  473858 notify.go:220] Checking for updates...
	I1025 10:07:47.635616  473858 config.go:182] Loaded profile config "multinode-927083": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:07:47.635632  473858 status.go:174] checking status of multinode-927083 ...
	I1025 10:07:47.636045  473858 cli_runner.go:164] Run: docker container inspect multinode-927083 --format={{.State.Status}}
	I1025 10:07:47.657764  473858 status.go:371] multinode-927083 host status = "Stopped" (err=<nil>)
	I1025 10:07:47.657806  473858 status.go:384] host is not running, skipping remaining checks
	I1025 10:07:47.657813  473858 status.go:176] multinode-927083 status: &{Name:multinode-927083 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 10:07:47.657848  473858 status.go:174] checking status of multinode-927083-m02 ...
	I1025 10:07:47.658118  473858 cli_runner.go:164] Run: docker container inspect multinode-927083-m02 --format={{.State.Status}}
	I1025 10:07:47.677345  473858 status.go:371] multinode-927083-m02 host status = "Stopped" (err=<nil>)
	I1025 10:07:47.677377  473858 status.go:384] host is not running, skipping remaining checks
	I1025 10:07:47.677385  473858 status.go:176] multinode-927083-m02 status: &{Name:multinode-927083-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.81s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (47.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-927083 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-927083 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (46.567430228s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-927083 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.20s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (24.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-927083
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-927083-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-927083-m02 --driver=docker  --container-runtime=crio: exit status 14 (87.290243ms)

                                                
                                                
-- stdout --
	* [multinode-927083-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-927083-m02' is duplicated with machine name 'multinode-927083-m02' in profile 'multinode-927083'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-927083-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-927083-m03 --driver=docker  --container-runtime=crio: (21.446558614s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-927083
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-927083: exit status 80 (305.599293ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-927083 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-927083-m03 already exists in multinode-927083-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-927083-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-927083-m03: (2.504024212s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.41s)

                                                
                                    
TestPreload (111.56s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-337821 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1025 10:09:49.731035  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-337821 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (47.186268869s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-337821 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-337821 image pull gcr.io/k8s-minikube/busybox: (2.32678046s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-337821
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-337821: (6.0317196s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-337821 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-337821 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (53.195757445s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-337821 image list
helpers_test.go:175: Cleaning up "test-preload-337821" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-337821
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-337821: (2.571265419s)
--- PASS: TestPreload (111.56s)

TestScheduledStopUnix (102.05s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-514449 --memory=3072 --driver=docker  --container-runtime=crio
E1025 10:11:14.979721  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/functional-558764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-514449 --memory=3072 --driver=docker  --container-runtime=crio: (25.552115673s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-514449 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-514449 -n scheduled-stop-514449
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-514449 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1025 10:11:21.215516  325455 retry.go:31] will retry after 90.879µs: open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/scheduled-stop-514449/pid: no such file or directory
I1025 10:11:21.216666  325455 retry.go:31] will retry after 192.862µs: open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/scheduled-stop-514449/pid: no such file or directory
I1025 10:11:21.217841  325455 retry.go:31] will retry after 167.836µs: open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/scheduled-stop-514449/pid: no such file or directory
I1025 10:11:21.218964  325455 retry.go:31] will retry after 235.798µs: open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/scheduled-stop-514449/pid: no such file or directory
I1025 10:11:21.220125  325455 retry.go:31] will retry after 256.664µs: open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/scheduled-stop-514449/pid: no such file or directory
I1025 10:11:21.221264  325455 retry.go:31] will retry after 846.609µs: open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/scheduled-stop-514449/pid: no such file or directory
I1025 10:11:21.222467  325455 retry.go:31] will retry after 1.293732ms: open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/scheduled-stop-514449/pid: no such file or directory
I1025 10:11:21.224673  325455 retry.go:31] will retry after 2.04519ms: open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/scheduled-stop-514449/pid: no such file or directory
I1025 10:11:21.226823  325455 retry.go:31] will retry after 1.714936ms: open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/scheduled-stop-514449/pid: no such file or directory
I1025 10:11:21.229069  325455 retry.go:31] will retry after 4.469925ms: open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/scheduled-stop-514449/pid: no such file or directory
I1025 10:11:21.234570  325455 retry.go:31] will retry after 3.038637ms: open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/scheduled-stop-514449/pid: no such file or directory
I1025 10:11:21.237721  325455 retry.go:31] will retry after 12.050222ms: open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/scheduled-stop-514449/pid: no such file or directory
I1025 10:11:21.249942  325455 retry.go:31] will retry after 11.38487ms: open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/scheduled-stop-514449/pid: no such file or directory
I1025 10:11:21.262246  325455 retry.go:31] will retry after 13.230959ms: open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/scheduled-stop-514449/pid: no such file or directory
I1025 10:11:21.276528  325455 retry.go:31] will retry after 27.536608ms: open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/scheduled-stop-514449/pid: no such file or directory
I1025 10:11:21.304854  325455 retry.go:31] will retry after 53.879224ms: open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/scheduled-stop-514449/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-514449 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-514449 -n scheduled-stop-514449
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-514449
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-514449 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-514449
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-514449: exit status 7 (84.69628ms)
-- stdout --
	scheduled-stop-514449
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-514449 -n scheduled-stop-514449
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-514449 -n scheduled-stop-514449: exit status 7 (86.961954ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-514449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-514449
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-514449: (4.867559065s)
--- PASS: TestScheduledStopUnix (102.05s)
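The burst of retry.go:31 lines above is the test polling for the scheduled-stop pid file with an increasing, jittered delay between attempts. A minimal sketch of that poll-with-backoff pattern, assuming hypothetical names (minikube's own helper is its retry package, not this code):

```go
package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// waitForFile polls until path exists, roughly doubling a jittered delay
// between attempts, like the "will retry after ..." lines in the log above.
func waitForFile(path string, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := 100 * time.Microsecond
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		// Jitter the delay so concurrent pollers don't retry in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: open %s: no such file or directory\n", sleep, path)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	_ = waitForFile("/tmp/scheduled-stop-pid", 2*time.Second)
}
```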

TestInsufficientStorage (10.12s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-591590 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
E1025 10:12:38.049557  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/functional-558764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-591590 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.527576897s)
-- stdout --
	{"specversion":"1.0","id":"c309dbe8-e940-476f-8c4a-976cbcb4e48a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-591590] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b8a4c4f0-74a3-49de-bef0-9b6972790d72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21767"}}
	{"specversion":"1.0","id":"ee75f4c9-958f-4e3c-977c-269174f81e8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cd6e2c72-b0a1-4a8d-b949-85a0e1b55e17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig"}}
	{"specversion":"1.0","id":"e5667c5b-7936-4fb8-b70f-34c3dab0d857","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube"}}
	{"specversion":"1.0","id":"908039b4-eb69-487d-bd4a-8f6fe7e9c542","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"cc039bfb-21e0-40f1-8a1e-581e713e9c4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b90b7f0c-74ff-4617-8d04-98e61d240726","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"406f466c-c33e-4c53-a948-cdb17202bbb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"a98a16ad-92a9-40e8-9b0c-4717358919cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ddc691e9-5c2a-4c31-9c14-ee79ddbf776b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"892ead4d-989c-4007-b314-61515f380e5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-591590\" primary control-plane node in \"insufficient-storage-591590\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"15eeccca-3dd8-4415-8217-627d2274e1d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3b2a18fe-5911-492f-94d2-c59b1fa03b69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"5debe189-bc6b-496a-8056-4d3e7e787170","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-591590 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-591590 --output=json --layout=cluster: exit status 7 (309.554718ms)
-- stdout --
	{"Name":"insufficient-storage-591590","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-591590","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1025 10:12:45.042689  494223 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-591590" does not appear in /home/jenkins/minikube-integration/21767-321838/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-591590 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-591590 --output=json --layout=cluster: exit status 7 (307.054946ms)
-- stdout --
	{"Name":"insufficient-storage-591590","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-591590","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1025 10:12:45.351381  494336 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-591590" does not appear in /home/jenkins/minikube-integration/21767-321838/kubeconfig
	E1025 10:12:45.362450  494336 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/insufficient-storage-591590/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-591590" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-591590
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-591590: (1.970233964s)
--- PASS: TestInsufficientStorage (10.12s)
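The `--output=json` run above emits one CloudEvents-style JSON object per line (`specversion`, `type`, `data`, ...), with errors such as RSRC_DOCKER_STORAGE arriving as `io.k8s.sigs.minikube.error` events. A short sketch of consuming that stream, assuming only the fields visible in the log above:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields minikube's JSON output shows in the log above;
// every value in the data map is a string in the sample output.
type event struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Pipe `minikube start --output=json ...` into this program's stdin.
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // JSON lines can be long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON noise
		}
		// Error events carry a name, an exitcode, and remediation advice.
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			continue
		}
		fmt.Println(ev.Data["message"])
	}
}
```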

TestRunningBinaryUpgrade (49.76s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2528242399 start -p running-upgrade-774322 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2528242399 start -p running-upgrade-774322 --memory=3072 --vm-driver=docker  --container-runtime=crio: (20.88021556s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-774322 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-774322 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.619687641s)
helpers_test.go:175: Cleaning up "running-upgrade-774322" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-774322
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-774322: (3.66501925s)
--- PASS: TestRunningBinaryUpgrade (49.76s)

TestKubernetesUpgrade (309.16s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-311859 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-311859 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.012875239s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-311859
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-311859: (2.473788544s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-311859 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-311859 status --format={{.Host}}: exit status 7 (120.468762ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-311859 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-311859 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m30.116128821s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-311859 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-311859 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-311859 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (95.419011ms)
-- stdout --
	* [kubernetes-upgrade-311859] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-311859
	    minikube start -p kubernetes-upgrade-311859 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3118592 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-311859 --kubernetes-version=v1.34.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-311859 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-311859 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.405171613s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-311859" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-311859
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-311859: (2.865136912s)
--- PASS: TestKubernetesUpgrade (309.16s)
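The downgrade attempt fails by design: once the cluster's state is at v1.34.1, minikube refuses to start it at an older Kubernetes version and exits 106 with K8S_DOWNGRADE_UNSUPPORTED. A sketch of that guard using golang.org/x/mod/semver; this is illustrative, not minikube's actual code path:

```go
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkVersionChange rejects a requested Kubernetes version older than the
// one the existing cluster was created with (the K8S_DOWNGRADE_UNSUPPORTED
// condition in the log above). Versions must carry the "v" prefix that
// golang.org/x/mod/semver expects, e.g. "v1.34.1".
func checkVersionChange(existing, requested string) error {
	if !semver.IsValid(existing) || !semver.IsValid(requested) {
		return fmt.Errorf("invalid version: %q -> %q", existing, requested)
	}
	if semver.Compare(requested, existing) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s; delete and recreate instead",
			existing, requested)
	}
	return nil
}

func main() {
	fmt.Println(checkVersionChange("v1.34.1", "v1.28.0")) // downgrade -> error
	fmt.Println(checkVersionChange("v1.28.0", "v1.34.1")) // upgrade -> nil
}
```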

TestMissingContainerUpgrade (100.72s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3743245548 start -p missing-upgrade-363411 --memory=3072 --driver=docker  --container-runtime=crio
E1025 10:12:52.799179  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/addons-582494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3743245548 start -p missing-upgrade-363411 --memory=3072 --driver=docker  --container-runtime=crio: (51.94050667s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-363411
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-363411: (1.72423972s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-363411
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-363411 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-363411 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.046586765s)
helpers_test.go:175: Cleaning up "missing-upgrade-363411" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-363411
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-363411: (4.34718519s)
--- PASS: TestMissingContainerUpgrade (100.72s)

TestPause/serial/Start (56.6s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-200480 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-200480 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (56.602075196s)
--- PASS: TestPause/serial/Start (56.60s)

TestStoppedBinaryUpgrade/Setup (0.54s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.54s)

TestStoppedBinaryUpgrade/Upgrade (69.17s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3827542213 start -p stopped-upgrade-291164 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3827542213 start -p stopped-upgrade-291164 --memory=3072 --vm-driver=docker  --container-runtime=crio: (51.346728413s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3827542213 -p stopped-upgrade-291164 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3827542213 -p stopped-upgrade-291164 stop: (2.014351379s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-291164 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-291164 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (15.807709382s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (69.17s)

TestPause/serial/SecondStartNoReconfiguration (6.14s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-200480 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-200480 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.130139476s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.14s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.11s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-291164
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-291164: (1.105461376s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.11s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-099609 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-099609 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (116.176233ms)
-- stdout --
	* [NoKubernetes-099609] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)
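This subtest only verifies flag validation: `--no-kubernetes` and `--kubernetes-version` are mutually exclusive, and the conflict is rejected before any cluster work starts. A minimal sketch of such a check with the standard flag package (a hypothetical flag set, not minikube's cobra wiring):

```go
package main

import (
	"errors"
	"flag"
	"fmt"
	"os"
)

func main() {
	fs := flag.NewFlagSet("start", flag.ExitOnError)
	noK8s := fs.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := fs.String("kubernetes-version", "", "Kubernetes version to run")
	_ = fs.Parse(os.Args[1:])

	// The MK_USAGE condition from the log: pinning a Kubernetes version
	// makes no sense when Kubernetes is disabled entirely.
	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, errors.New("cannot specify --kubernetes-version with --no-kubernetes"))
		os.Exit(14) // matches the exit status 14 (MK_USAGE) seen above
	}
	fmt.Println("flags ok")
}
```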

TestNoKubernetes/serial/StartWithK8s (34.07s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-099609 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-099609 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.683242621s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-099609 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (34.07s)

TestNetworkPlugins/group/false (5.96s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-119085 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-119085 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (327.964057ms)
-- stdout --
	* [false-119085] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1025 10:14:33.629829  525677 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:14:33.630000  525677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:14:33.630009  525677 out.go:374] Setting ErrFile to fd 2...
	I1025 10:14:33.630017  525677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:14:33.630342  525677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-321838/.minikube/bin
	I1025 10:14:33.631065  525677 out.go:368] Setting JSON to false
	I1025 10:14:33.632415  525677 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7023,"bootTime":1761380251,"procs":293,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 10:14:33.632532  525677 start.go:141] virtualization: kvm guest
	I1025 10:14:33.634057  525677 out.go:179] * [false-119085] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 10:14:33.635895  525677 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:14:33.635922  525677 notify.go:220] Checking for updates...
	I1025 10:14:33.638962  525677 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:14:33.640511  525677 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-321838/kubeconfig
	I1025 10:14:33.642214  525677 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-321838/.minikube
	I1025 10:14:33.643408  525677 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 10:14:33.645203  525677 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:14:33.648473  525677 config.go:182] Loaded profile config "NoKubernetes-099609": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:14:33.648625  525677 config.go:182] Loaded profile config "cert-expiration-160366": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:14:33.648775  525677 config.go:182] Loaded profile config "kubernetes-upgrade-311859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:14:33.648917  525677 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:14:33.692788  525677 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 10:14:33.692918  525677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:14:33.842670  525677 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-25 10:14:33.819858102 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:14:33.843190  525677 docker.go:318] overlay module found
	I1025 10:14:33.847660  525677 out.go:179] * Using the docker driver based on user configuration
	I1025 10:14:33.852522  525677 start.go:305] selected driver: docker
	I1025 10:14:33.852584  525677 start.go:925] validating driver "docker" against <nil>
	I1025 10:14:33.852602  525677 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:14:33.856492  525677 out.go:203] 
	W1025 10:14:33.858397  525677 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1025 10:14:33.860342  525677 out.go:203] 
** /stderr **
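The exit 14 above is another MK_USAGE guard: CRI-O always needs a CNI plugin, so `--cni=false` is rejected before any cluster work starts. A sketch of that runtime/CNI compatibility check, illustrative only:

```go
package main

import "fmt"

// validateCNI mirrors the guard seen above: container runtimes other than
// Docker rely on a CNI plugin, so disabling CNI is only legal with Docker.
func validateCNI(runtime, cni string) error {
	if cni == "false" && runtime != "docker" {
		return fmt.Errorf("the %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	fmt.Println(validateCNI("crio", "false"))   // error: requires CNI
	fmt.Println(validateCNI("docker", "false")) // nil
}
```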
net_test.go:88: 
----------------------- debugLogs start: false-119085 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-119085

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-119085

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-119085

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-119085

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-119085

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-119085

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-119085

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-119085

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-119085

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-119085

>>> host: /etc/nsswitch.conf:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> host: /etc/hosts:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> host: /etc/resolv.conf:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-119085

>>> host: crictl pods:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> host: crictl containers:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> k8s: describe netcat deployment:
error: context "false-119085" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-119085" does not exist

>>> k8s: netcat logs:
error: context "false-119085" does not exist

>>> k8s: describe coredns deployment:
error: context "false-119085" does not exist

>>> k8s: describe coredns pods:
error: context "false-119085" does not exist

>>> k8s: coredns logs:
error: context "false-119085" does not exist

>>> k8s: describe api server pod(s):
error: context "false-119085" does not exist

>>> k8s: api server logs:
error: context "false-119085" does not exist

>>> host: /etc/cni:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> host: ip a s:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> host: ip r s:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> host: iptables-save:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> host: iptables table nat:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> k8s: describe kube-proxy daemon set:
error: context "false-119085" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-119085" does not exist

>>> k8s: kube-proxy logs:
error: context "false-119085" does not exist

>>> host: kubelet daemon status:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> host: kubelet daemon config:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> k8s: kubelet logs:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 10:14:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: NoKubernetes-099609
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 10:14:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-311859
contexts:
- context:
    cluster: NoKubernetes-099609
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 10:14:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-099609
  name: NoKubernetes-099609
- context:
    cluster: kubernetes-upgrade-311859
    user: kubernetes-upgrade-311859
  name: kubernetes-upgrade-311859
current-context: NoKubernetes-099609
kind: Config
users:
- name: NoKubernetes-099609
  user:
    client-certificate: /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/NoKubernetes-099609/client.crt
    client-key: /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/NoKubernetes-099609/client.key
- name: kubernetes-upgrade-311859
  user:
    client-certificate: /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/kubernetes-upgrade-311859/client.crt
    client-key: /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/kubernetes-upgrade-311859/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-119085

>>> host: docker daemon status:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> host: docker daemon config:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> host: /etc/docker/daemon.json:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> host: docker system info:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> host: cri-docker daemon status:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> host: cri-docker daemon config:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> host: cri-dockerd version:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> host: containerd daemon status:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> host: containerd daemon config:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> host: /etc/containerd/config.toml:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> host: containerd config dump:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> host: crio daemon status:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> host: crio daemon config:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> host: /etc/crio:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"

>>> host: crio config:
* Profile "false-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119085"
                                                
----------------------- debugLogs end: false-119085 [took: 5.457551599s] --------------------------------
helpers_test.go:175: Cleaning up "false-119085" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-119085
--- PASS: TestNetworkPlugins/group/false (5.96s)

TestNoKubernetes/serial/StartWithStopK8s (18.65s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-099609 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-099609 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (16.060121579s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-099609 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-099609 status -o json: exit status 2 (380.861541ms)

-- stdout --
	{"Name":"NoKubernetes-099609","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-099609
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-099609: (2.206113526s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.65s)
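
For context on the non-zero exit above: with --no-kubernetes the host container keeps running while the kubelet and API server stay stopped, and the test accepts `minikube status` exiting non-zero for that mixed state. A minimal Go sketch of reading the status JSON shown in the stdout block (the struct mirrors the fields visible there and is a stand-in for illustration, not minikube's internal type):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileStatus mirrors the fields visible in the stdout above.
	type profileStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		raw := `{"Name":"NoKubernetes-099609","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
		var s profileStatus
		if err := json.Unmarshal([]byte(raw), &s); err != nil {
			panic(err)
		}
		// Host up with kubelet/apiserver stopped is the expected shape here.
		fmt.Printf("host=%s kubelet=%s apiserver=%s\n", s.Host, s.Kubelet, s.APIServer)
	}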

TestNoKubernetes/serial/Start (5.3s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-099609 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-099609 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.304837003s)
--- PASS: TestNoKubernetes/serial/Start (5.30s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-099609 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-099609 "sudo systemctl is-active --quiet service kubelet": exit status 1 (315.074862ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)
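
The exit status 1 above is the passing outcome: `systemctl is-active --quiet` exits 0 only when the unit is active, and the remote status 3 reported by ssh is the code systemd conventionally returns for an inactive unit. A standalone Go sketch of the same probe (illustrative only, not minikube's actual test helper):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The same command the test runs over ssh inside the node.
		cmd := exec.Command("systemctl", "is-active", "--quiet", "service", "kubelet")
		if err := cmd.Run(); err == nil {
			fmt.Println("kubelet active (unexpected for a --no-kubernetes profile)")
		} else if exitErr, ok := err.(*exec.ExitError); ok {
			// Non-zero (3 above) means not active, which is what this check wants.
			fmt.Printf("kubelet not active (exit %d), as expected\n", exitErr.ExitCode())
		}
	}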

TestNoKubernetes/serial/ProfileList (19.48s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (18.513985378s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (19.48s)

TestNoKubernetes/serial/Stop (1.3s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-099609
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-099609: (1.296576264s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

TestNoKubernetes/serial/StartNoArgs (6.87s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-099609 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-099609 --driver=docker  --container-runtime=crio: (6.86982165s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.87s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-099609 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-099609 "sudo systemctl is-active --quiet service kubelet": exit status 1 (313.638439ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

TestNetworkPlugins/group/auto/Start (41.09s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-119085 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-119085 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (41.09056552s)
--- PASS: TestNetworkPlugins/group/auto/Start (41.09s)

TestNetworkPlugins/group/kindnet/Start (38.09s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-119085 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1025 10:16:14.980564  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/functional-558764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-119085 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (38.087717662s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (38.09s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-119085 "pgrep -a kubelet"
I1025 10:16:34.834020  325455 config.go:182] Loaded profile config "auto-119085": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-119085 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6bbrj" [85f17f0d-cec5-400d-a190-5c78903de9ed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6bbrj" [85f17f0d-cec5-400d-a190-5c78903de9ed] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004374594s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.24s)
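
The "waiting 15m0s for pods matching ..." lines in this and the following NetCatPod blocks all follow one poll-until-healthy pattern: list pods by label, log each observed phase (Pending with ContainersNotReady, then Running), and return as soon as the pods are healthy or the timeout lapses. A generic sketch of that loop (illustrative; the real machinery lives in minikube's helpers_test.go):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitFor polls check every interval until it reports true or the
	// timeout elapses.
	func waitFor(timeout, interval time.Duration, check func() (bool, error)) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			ok, err := check()
			if err != nil {
				return err
			}
			if ok {
				return nil
			}
			time.Sleep(interval)
		}
		return errors.New("timed out waiting for condition")
	}

	func main() {
		start := time.Now()
		// Stand-in condition: pretend the pod turns Running after 3 seconds.
		err := waitFor(15*time.Minute, time.Second, func() (bool, error) {
			return time.Since(start) > 3*time.Second, nil
		})
		fmt.Println("wait result:", err)
	}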

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-kcrk2" [2a79eddf-191d-4ca9-b886-e806ad9e71a9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004444043s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-119085 "pgrep -a kubelet"
I1025 10:16:42.571210  325455 config.go:182] Loaded profile config "kindnet-119085": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (8.21s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-119085 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2mlz5" [f5a2450b-791c-40c9-8574-5c6d82136ce6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2mlz5" [f5a2450b-791c-40c9-8574-5c6d82136ce6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.004164073s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.21s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-119085 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)
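
The DNS checks here exec `nslookup kubernetes.default` inside the netcat pod, verifying that cluster DNS resolves the API server's Service name under each CNI. A Go equivalent of the probe (a sketch that assumes it runs inside a pod whose /etc/resolv.conf carries the cluster search domains):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Resolves through whatever resolver /etc/resolv.conf points at;
		// inside a pod that is the cluster DNS service.
		addrs, err := net.LookupHost("kubernetes.default")
		if err != nil {
			fmt.Println("cluster DNS lookup failed:", err)
			return
		}
		fmt.Println("kubernetes.default resolves to:", addrs)
	}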

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-119085 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-119085 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
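
The Localhost and HairPin checks both run `nc -w 5 -i 5 -z <target> 8080` inside the pod: -z makes nc a zero-I/O connect-only probe, -w caps the timeout at 5 seconds, and -i sets the delay interval. Localhost targets 127.0.0.1, while HairPin dials the pod's own Service name (netcat) to confirm hairpin NAT works. A minimal Go analogue of the probe (an illustrative sketch, not the test code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probe performs the zero-I/O connect that `nc -z` does: dial, then
	// close without sending any payload.
	func probe(addr string) error {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second) // nc's -w 5
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		for _, addr := range []string{"localhost:8080", "netcat:8080"} {
			if err := probe(addr); err != nil {
				fmt.Printf("%s: closed or unreachable (%v)\n", addr, err)
				continue
			}
			fmt.Printf("%s: open\n", addr)
		}
	}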

TestNetworkPlugins/group/kindnet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-119085 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

TestNetworkPlugins/group/kindnet/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-119085 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-119085 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/calico/Start (53.58s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-119085 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-119085 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (53.579278044s)
--- PASS: TestNetworkPlugins/group/calico/Start (53.58s)

TestNetworkPlugins/group/custom-flannel/Start (48.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-119085 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-119085 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (48.178575642s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (48.18s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-td58z" [ebc74a83-41b2-4b80-a9bb-d2437964ccd7] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-td58z" [ebc74a83-41b2-4b80-a9bb-d2437964ccd7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004114434s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-119085 "pgrep -a kubelet"
I1025 10:18:00.770505  325455 config.go:182] Loaded profile config "custom-flannel-119085": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (8.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-119085 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zxnqx" [0628bd6a-68a0-4585-a38a-af40a4bec24d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zxnqx" [0628bd6a-68a0-4585-a38a-af40a4bec24d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004686089s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.21s)

TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-119085 "pgrep -a kubelet"
I1025 10:18:04.524381  325455 config.go:182] Loaded profile config "calico-119085": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

TestNetworkPlugins/group/calico/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-119085 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8k7fp" [c7cba37b-acf0-4ce6-a3f6-f7053a7a722c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8k7fp" [c7cba37b-acf0-4ce6-a3f6-f7053a7a722c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.00338492s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.21s)

TestNetworkPlugins/group/enable-default-cni/Start (41.74s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-119085 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-119085 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (41.737457755s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (41.74s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-119085 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-119085 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-119085 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-119085 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

TestNetworkPlugins/group/calico/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-119085 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.09s)

TestNetworkPlugins/group/calico/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-119085 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

TestNetworkPlugins/group/flannel/Start (54.33s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-119085 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-119085 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (54.331318174s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.33s)

TestNetworkPlugins/group/bridge/Start (39.29s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-119085 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-119085 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (39.287832096s)
--- PASS: TestNetworkPlugins/group/bridge/Start (39.29s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-119085 "pgrep -a kubelet"
I1025 10:18:50.491104  325455 config.go:182] Loaded profile config "enable-default-cni-119085": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-119085 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cds6q" [7eaa834a-51fd-425c-8ac7-325bfa1045e3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-cds6q" [7eaa834a-51fd-425c-8ac7-325bfa1045e3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.00490175s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.29s)

TestStartStop/group/old-k8s-version/serial/FirstStart (53.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-714798 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-714798 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (53.855997996s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (53.86s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-119085 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-119085 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-119085 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-119085 "pgrep -a kubelet"
I1025 10:19:18.232740  325455 config.go:182] Loaded profile config "bridge-119085": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-119085 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4k89b" [a7df6613-55cf-4977-ae80-618bc6e958ac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4k89b" [a7df6613-55cf-4977-ae80-618bc6e958ac] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003887117s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

TestStartStop/group/no-preload/serial/FirstStart (58.53s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-899665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-899665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (58.531073753s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (58.53s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-vqrdh" [c73aa9e7-63a0-4447-a8b2-37fcb3fc2d7a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004751511s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/bridge/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-119085 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

TestNetworkPlugins/group/bridge/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-119085 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-119085 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-119085 "pgrep -a kubelet"
I1025 10:19:32.314745  325455 config.go:182] Loaded profile config "flannel-119085": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-119085 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jdc7b" [7dd50604-bb35-443b-a3a0-8101b5667519] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jdc7b" [7dd50604-bb35-443b-a3a0-8101b5667519] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003984333s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

TestNetworkPlugins/group/flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-119085 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-119085 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-119085 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)
E1025 10:21:38.841976  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/kindnet-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:21:40.191110  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/auto-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (46.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-767846 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-767846 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (46.192824284s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (46.19s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-714798 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [419d2dd5-4eb7-49cf-a8cf-591e99689202] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [419d2dd5-4eb7-49cf-a8cf-591e99689202] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004576906s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-714798 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.33s)
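
The `ulimit -n` exec that closes each DeployApp block reads the open-file-descriptor limit inside the busybox container, a quick sanity check that the container runtime applied usable rlimits. An illustrative Go equivalent of what that probe reads (a sketch; the test simply shells out as shown above):

	package main

	import (
		"fmt"
		"syscall"
	)

	func main() {
		// The same soft/hard open-file limits that `ulimit -n` reports.
		var lim syscall.Rlimit
		if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
			panic(err)
		}
		fmt.Printf("open files: soft=%d hard=%d\n", lim.Cur, lim.Max)
	}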

TestStartStop/group/old-k8s-version/serial/Stop (17.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-714798 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-714798 --alsologtostderr -v=3: (17.481081171s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (17.48s)

TestStartStop/group/newest-cni/serial/FirstStart (28.14s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-667966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-667966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (28.134208225s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.14s)

TestStartStop/group/no-preload/serial/DeployApp (9.27s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-899665 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ec5c2e6d-1ade-45df-8269-93809b94484b] Pending
helpers_test.go:352: "busybox" [ec5c2e6d-1ade-45df-8269-93809b94484b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ec5c2e6d-1ade-45df-8269-93809b94484b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00468706s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-899665 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.27s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-714798 -n old-k8s-version-714798
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-714798 -n old-k8s-version-714798: exit status 7 (118.062361ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-714798 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)
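
The `--format={{.Host}}` argument in these status checks is an ordinary Go text/template applied to the status object, which is why stdout reduces to the single word "Stopped" while the exit status 7 separately encodes the stopped state (the test notes it "may be ok"). A minimal sketch of that formatting (the struct is a stand-in for illustration, not minikube's actual type):

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		status := struct{ Host string }{Host: "Stopped"}
		// The flag value {{.Host}} is parsed and executed as a Go template.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		if err := tmpl.Execute(os.Stdout, status); err != nil {
			panic(err)
		}
	}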

TestStartStop/group/old-k8s-version/serial/SecondStart (53.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-714798 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-714798 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (52.617442723s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-714798 -n old-k8s-version-714798
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (53.04s)

TestStartStop/group/no-preload/serial/Stop (16.66s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-899665 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-899665 --alsologtostderr -v=3: (16.658730379s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.66s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-767846 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [15f6b26e-81c5-48eb-9bd1-5674b56ca028] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [15f6b26e-81c5-48eb-9bd1-5674b56ca028] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.00450834s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-767846 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

TestStartStop/group/newest-cni/serial/Stop (8.12s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-667966 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-667966 --alsologtostderr -v=3: (8.124472578s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.12s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-667966 -n newest-cni-667966
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-667966 -n newest-cni-667966: exit status 7 (98.672049ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-667966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/newest-cni/serial/SecondStart (13.13s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-667966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-667966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (12.71081061s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-667966 -n newest-cni-667966
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (16.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-767846 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-767846 --alsologtostderr -v=3: (16.502892304s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.50s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-899665 -n no-preload-899665
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-899665 -n no-preload-899665: exit status 7 (116.676359ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-899665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/no-preload/serial/SecondStart (52.37s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-899665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-899665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.976850482s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-899665 -n no-preload-899665
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (52.37s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-667966 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-767846 -n default-k8s-diff-port-767846
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-767846 -n default-k8s-diff-port-767846: exit status 7 (98.902117ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-767846 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-767846 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-767846 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (45.303988008s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-767846 -n default-k8s-diff-port-767846
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.67s)

TestStartStop/group/embed-certs/serial/FirstStart (45.64s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-683681 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1025 10:21:14.980151  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/functional-558764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-683681 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (45.639198049s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.64s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-mshs4" [633c266d-f837-432b-843f-b86244518663] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004904976s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-mshs4" [633c266d-f837-432b-843f-b86244518663] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004024596s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-714798 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-714798 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6zv5c" [1c1c50ff-70c9-457a-a5e5-dd294a77f730] Running
E1025 10:21:41.404170  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/kindnet-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:21:45.312972  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/auto-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:21:46.526581  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/kindnet-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003939868s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6zv5c" [1c1c50ff-70c9-457a-a5e5-dd294a77f730] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004669158s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-899665 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-wzpft" [f628496a-a0ef-4646-bd5b-6469e37ccbd4] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004718235s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-899665 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-wzpft" [f628496a-a0ef-4646-bd5b-6469e37ccbd4] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003230272s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-767846 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/DeployApp (9.25s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-683681 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4edb7a57-15b4-4297-899b-96dd0dc4a482] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4edb7a57-15b4-4297-899b-96dd0dc4a482] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004874779s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-683681 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.25s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-767846 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/Stop (18.13s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-683681 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-683681 --alsologtostderr -v=3: (18.131036366s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.13s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-683681 -n embed-certs-683681
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-683681 -n embed-certs-683681: exit status 7 (88.403605ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-683681 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (46.77s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-683681 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1025 10:22:56.998444  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/auto-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:22:58.184476  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/calico-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:22:58.190977  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/calico-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:22:58.202424  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/calico-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:22:58.212959  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/kindnet-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:22:58.224479  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/calico-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:22:58.266046  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/calico-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:22:58.347587  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/calico-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:22:58.509383  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/calico-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:22:58.831211  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/calico-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:22:59.473004  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/calico-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:23:00.754499  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/calico-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:23:00.962456  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/custom-flannel-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:23:00.968954  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/custom-flannel-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:23:00.980436  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/custom-flannel-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:23:01.001999  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/custom-flannel-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:23:01.043500  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/custom-flannel-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:23:01.125102  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/custom-flannel-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:23:01.287108  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/custom-flannel-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:23:01.609230  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/custom-flannel-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:23:02.251409  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/custom-flannel-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:23:03.316002  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/calico-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:23:03.533717  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/custom-flannel-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:23:06.095144  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/custom-flannel-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:23:08.437729  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/calico-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:23:11.217533  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/custom-flannel-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-683681 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (46.425913542s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-683681 -n embed-certs-683681
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (46.77s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-b2cmv" [104da91e-df0f-49a9-bf95-7fd18378292d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004544282s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-b2cmv" [104da91e-df0f-49a9-bf95-7fd18378292d] Running
E1025 10:23:18.679578  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/calico-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:23:21.459334  325455 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/custom-flannel-119085/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004177664s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-683681 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-683681 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

Test skip (26/326)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (5.49s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-119085 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-119085

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-119085

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-119085

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-119085

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-119085

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-119085

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-119085

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-119085

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-119085

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-119085

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

>>> host: /etc/hosts:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

>>> host: /etc/resolv.conf:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-119085

>>> host: crictl pods:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

>>> host: crictl containers:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

>>> k8s: describe netcat deployment:
error: context "kubenet-119085" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-119085" does not exist

>>> k8s: netcat logs:
error: context "kubenet-119085" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-119085" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-119085" does not exist

>>> k8s: coredns logs:
error: context "kubenet-119085" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-119085" does not exist

>>> k8s: api server logs:
error: context "kubenet-119085" does not exist

>>> host: /etc/cni:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

>>> host: ip a s:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

>>> host: ip r s:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

>>> host: iptables-save:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

>>> host: iptables table nat:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-119085" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-119085" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-119085" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

>>> host: kubelet daemon config:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

>>> k8s: kubelet logs:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 10:14:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: force-systemd-env-690950
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 10:14:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-311859
contexts:
- context:
    cluster: force-systemd-env-690950
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 10:14:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: force-systemd-env-690950
  name: force-systemd-env-690950
- context:
    cluster: kubernetes-upgrade-311859
    user: kubernetes-upgrade-311859
  name: kubernetes-upgrade-311859
current-context: kubernetes-upgrade-311859
kind: Config
users:
- name: force-systemd-env-690950
  user:
    client-certificate: /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/force-systemd-env-690950/client.crt
    client-key: /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/force-systemd-env-690950/client.key
- name: kubernetes-upgrade-311859
  user:
    client-certificate: /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/kubernetes-upgrade-311859/client.crt
    client-key: /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/kubernetes-upgrade-311859/client.key
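
Note: the kubeconfig above only defines contexts for force-systemd-env-690950 and kubernetes-upgrade-311859; there is no kubenet-119085 entry, which is why every kubectl probe in this debug dump reports a missing context. A minimal sketch of how to verify that directly, assuming a standard kubectl install (these commands are not part of the recorded test run):

    kubectl config get-contexts                           # lists the two contexts shown above
    kubectl config current-context                        # should print kubernetes-upgrade-311859
    kubectl config use-context force-systemd-env-690950   # switches the active context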

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-119085

>>> host: docker daemon status:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

>>> host: docker daemon config:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

>>> host: docker system info:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

>>> host: cri-docker daemon status:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

>>> host: cri-docker daemon config:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119085"

                                                
                                                
----------------------- debugLogs end: kubenet-119085 [took: 5.221762185s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-119085" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-119085
--- SKIP: TestNetworkPlugins/group/kubenet (5.49s)
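
Every ">>> host:" probe in the kubenet debugLogs above prints the same two lines because the collector shells into the node via "minikube ssh" against a profile that the skipped test never created. A rough reproduction (hypothetical invocation, not the harness's actual command; the probe target is illustrative):

    minikube -p kubenet-119085 ssh -- sudo systemctl status crio
    # * Profile "kubenet-119085" not found. Run "minikube profile list" to view all profiles.
    # To start a cluster, run: "minikube start -p kubenet-119085"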

x
+
TestNetworkPlugins/group/cilium (4s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-119085 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-119085

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-119085

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-119085

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-119085

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-119085

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-119085

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-119085

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-119085

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-119085

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-119085

>>> host: /etc/nsswitch.conf:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> host: /etc/hosts:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> host: /etc/resolv.conf:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-119085

>>> host: crictl pods:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> host: crictl containers:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> k8s: describe netcat deployment:
error: context "cilium-119085" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-119085" does not exist

>>> k8s: netcat logs:
error: context "cilium-119085" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-119085" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-119085" does not exist

>>> k8s: coredns logs:
error: context "cilium-119085" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-119085" does not exist

>>> k8s: api server logs:
error: context "cilium-119085" does not exist

>>> host: /etc/cni:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> host: ip a s:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> host: ip r s:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> host: iptables-save:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> host: iptables table nat:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-119085

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-119085

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-119085" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-119085" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-119085

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-119085

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-119085" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-119085" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-119085" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-119085" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-119085" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> host: kubelet daemon config:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> k8s: kubelet logs:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 10:14:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: NoKubernetes-099609
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21767-321838/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 10:14:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-311859
contexts:
- context:
    cluster: NoKubernetes-099609
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 10:14:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-099609
  name: NoKubernetes-099609
- context:
    cluster: kubernetes-upgrade-311859
    user: kubernetes-upgrade-311859
  name: kubernetes-upgrade-311859
current-context: NoKubernetes-099609
kind: Config
users:
- name: NoKubernetes-099609
  user:
    client-certificate: /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/NoKubernetes-099609/client.crt
    client-key: /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/NoKubernetes-099609/client.key
- name: kubernetes-upgrade-311859
  user:
    client-certificate: /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/kubernetes-upgrade-311859/client.crt
    client-key: /home/jenkins/minikube-integration/21767-321838/.minikube/profiles/kubernetes-upgrade-311859/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-119085

>>> host: docker daemon status:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> host: docker daemon config:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> host: docker system info:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> host: cri-docker daemon status:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> host: cri-docker daemon config:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> host: cri-dockerd version:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> host: containerd daemon status:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> host: containerd daemon config:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> host: containerd config dump:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> host: crio daemon status:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> host: crio daemon config:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> host: /etc/crio:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

>>> host: crio config:
* Profile "cilium-119085" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119085"

----------------------- debugLogs end: cilium-119085 [took: 3.830064646s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-119085" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-119085
--- SKIP: TestNetworkPlugins/group/cilium (4.00s)
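
The ">>> k8s:" probes above fail in two flavors for the same underlying reason: the kubeconfig dumped in the section only defines the NoKubernetes-099609 and kubernetes-upgrade-311859 contexts, so every kubectl call pinned to the cilium-119085 context aborts before contacting any server. A hypothetical invocation reproducing the second flavor (command chosen for illustration, not taken from the harness):

    kubectl --context cilium-119085 get pods
    # error: context "cilium-119085" does not exist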

x
+
TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-805899" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-805899
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)